Test Report: KVM_Linux_crio 17527

e4a17a129ff7b90db2cf05af39ada1ef78f6d52a:2023-10-31:31678

Failed tests (29/292)

Order  Failed test  Duration (s)
28 TestAddons/parallel/Ingress 158.63
41 TestAddons/StoppedEnableDisable 155.08
111 TestFunctional/parallel/ImageCommands/ImageListTable 2.53
112 TestFunctional/parallel/ImageCommands/ImageListJson 2.53
114 TestFunctional/parallel/ImageCommands/ImageBuild 6.37
157 TestIngressAddonLegacy/serial/ValidateIngressAddons 167.44
205 TestMultiNode/serial/PingHostFrom2Pods 3.28
211 TestMultiNode/serial/RestartKeepsNodes 689.16
213 TestMultiNode/serial/StopMultiNode 143.11
220 TestPreload 284.44
226 TestRunningBinaryUpgrade 155.58
245 TestStoppedBinaryUpgrade/Upgrade 271.74
246 TestPause/serial/SecondStartNoReconfiguration 68.5
272 TestStartStop/group/old-k8s-version/serial/Stop 140.19
275 TestStartStop/group/no-preload/serial/Stop 140.11
278 TestStartStop/group/embed-certs/serial/Stop 139.75
281 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.42
285 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.53
286 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
288 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.42
290 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
292 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.45
293 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.3
294 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.33
295 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.34
296 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 467.23
297 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 529.16
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 148.8
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 60.54
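
A single failure from this table can usually be reproduced outside CI by running just that test. A minimal sketch, assuming a checkout of the minikube source tree and the standard Go test runner; the kvm2/crio arguments this job passes to the test harness are omitted here and would have to be supplied separately:

	# hypothetical local re-run of one failed test from the table above
	go test -v -timeout 60m ./test/integration -run 'TestAddons/parallel/Ingress'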
TestAddons/parallel/Ingress (158.63s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-780757 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-780757 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-780757 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [285680b9-b8c5-4686-af4a-42f41f4f3218] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [285680b9-b8c5-4686-af4a-42f41f4f3218] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.02429962s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-780757 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-780757 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.83075352s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
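The step that fails above is the in-VM curl against the ingress controller; curl exits with status 28 when the request times out (reported here through ssh as "Process exited with status 28"), so the endpoint never answered rather than refusing the connection. A hedged sketch of repeating the same check by hand against this profile, reusing the command and Host header from the log above (the ingress addon is disabled again a few lines below, so it would need to be re-enabled first):

	out/minikube-linux-amd64 -p addons-780757 ssh "curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1/ -H 'Host: nginx.example.com'"
	kubectl --context addons-780757 -n ingress-nginx get pods -o wide
	kubectl --context addons-780757 get ingress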
addons_test.go:285: (dbg) Run:  kubectl --context addons-780757 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-780757 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.172
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-780757 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-780757 addons disable ingress-dns --alsologtostderr -v=1: (1.838756922s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-780757 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-780757 addons disable ingress --alsologtostderr -v=1: (7.884779475s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-780757 -n addons-780757
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-780757 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-780757 logs -n 25: (1.342152983s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-629575 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:01 UTC |                     |
	|         | -p download-only-629575                                                                     |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                                                                |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:02 UTC | 30 Oct 23 23:02 UTC |
	| delete  | -p download-only-629575                                                                     | download-only-629575 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:02 UTC | 30 Oct 23 23:02 UTC |
	| delete  | -p download-only-629575                                                                     | download-only-629575 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:02 UTC | 30 Oct 23 23:02 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-415718 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:02 UTC |                     |
	|         | binary-mirror-415718                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |                |                     |                     |
	|         | http://127.0.0.1:41209                                                                      |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-415718                                                                     | binary-mirror-415718 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:02 UTC | 30 Oct 23 23:02 UTC |
	| addons  | disable dashboard -p                                                                        | addons-780757        | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:02 UTC |                     |
	|         | addons-780757                                                                               |                      |         |                |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-780757        | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:02 UTC |                     |
	|         | addons-780757                                                                               |                      |         |                |                     |                     |
	| start   | -p addons-780757 --wait=true                                                                | addons-780757        | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:02 UTC | 30 Oct 23 23:04 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |                |                     |                     |
	|         | --addons=registry                                                                           |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-780757        | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:04 UTC | 30 Oct 23 23:04 UTC |
	|         | -p addons-780757                                                                            |                      |         |                |                     |                     |
	| ssh     | addons-780757 ssh cat                                                                       | addons-780757        | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:04 UTC | 30 Oct 23 23:04 UTC |
	|         | /opt/local-path-provisioner/pvc-32d74994-96d4-4338-bff6-25a7bc634797_default_test-pvc/file1 |                      |         |                |                     |                     |
	| addons  | addons-780757 addons disable                                                                | addons-780757        | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:04 UTC | 30 Oct 23 23:05 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-780757 addons                                                                        | addons-780757        | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:04 UTC | 30 Oct 23 23:04 UTC |
	|         | disable metrics-server                                                                      |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ip      | addons-780757 ip                                                                            | addons-780757        | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:04 UTC | 30 Oct 23 23:04 UTC |
	| addons  | addons-780757 addons disable                                                                | addons-780757        | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:04 UTC | 30 Oct 23 23:04 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-780757        | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:04 UTC | 30 Oct 23 23:04 UTC |
	|         | addons-780757                                                                               |                      |         |                |                     |                     |
	| addons  | addons-780757 addons disable                                                                | addons-780757        | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:04 UTC | 30 Oct 23 23:04 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-780757        | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:04 UTC | 30 Oct 23 23:04 UTC |
	|         | -p addons-780757                                                                            |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-780757        | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:04 UTC | 30 Oct 23 23:04 UTC |
	|         | addons-780757                                                                               |                      |         |                |                     |                     |
	| ssh     | addons-780757 ssh curl -s                                                                   | addons-780757        | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:05 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |                |                     |                     |
	| addons  | addons-780757 addons                                                                        | addons-780757        | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:05 UTC | 30 Oct 23 23:05 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-780757 addons                                                                        | addons-780757        | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:05 UTC | 30 Oct 23 23:05 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ip      | addons-780757 ip                                                                            | addons-780757        | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:07 UTC | 30 Oct 23 23:07 UTC |
	| addons  | addons-780757 addons disable                                                                | addons-780757        | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:07 UTC | 30 Oct 23 23:07 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | addons-780757 addons disable                                                                | addons-780757        | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:07 UTC | 30 Oct 23 23:07 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/30 23:02:04
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
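	(Reading that format against the first entry below: "I" = Info severity, "1030" = Oct 30, "23:02:04.017663" = time, "216380" = thread id, "out.go:296" = source file and line, followed by the message.)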
	I1030 23:02:04.017663  216380 out.go:296] Setting OutFile to fd 1 ...
	I1030 23:02:04.017779  216380 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:02:04.017788  216380 out.go:309] Setting ErrFile to fd 2...
	I1030 23:02:04.017793  216380 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:02:04.017970  216380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1030 23:02:04.018544  216380 out.go:303] Setting JSON to false
	I1030 23:02:04.019378  216380 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":24276,"bootTime":1698682648,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 23:02:04.019451  216380 start.go:138] virtualization: kvm guest
	I1030 23:02:04.021409  216380 out.go:177] * [addons-780757] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 23:02:04.022760  216380 out.go:177]   - MINIKUBE_LOCATION=17527
	I1030 23:02:04.024115  216380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 23:02:04.022783  216380 notify.go:220] Checking for updates...
	I1030 23:02:04.027207  216380 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:02:04.028457  216380 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1030 23:02:04.029724  216380 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 23:02:04.031004  216380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 23:02:04.032460  216380 driver.go:378] Setting default libvirt URI to qemu:///system
	I1030 23:02:04.062753  216380 out.go:177] * Using the kvm2 driver based on user configuration
	I1030 23:02:04.063978  216380 start.go:298] selected driver: kvm2
	I1030 23:02:04.063993  216380 start.go:902] validating driver "kvm2" against <nil>
	I1030 23:02:04.064003  216380 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 23:02:04.064694  216380 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:02:04.064777  216380 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17527-208817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 23:02:04.078702  216380 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1030 23:02:04.078760  216380 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1030 23:02:04.078993  216380 start_flags.go:934] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 23:02:04.079057  216380 cni.go:84] Creating CNI manager for ""
	I1030 23:02:04.079074  216380 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 23:02:04.079093  216380 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 23:02:04.079107  216380 start_flags.go:323] config:
	{Name:addons-780757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-780757 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1030 23:02:04.079270  216380 iso.go:125] acquiring lock: {Name:mk17c26869b21ec4c3726ac5b4b2fb393d92c043 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:02:04.081064  216380 out.go:177] * Starting control plane node addons-780757 in cluster addons-780757
	I1030 23:02:04.082333  216380 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1030 23:02:04.082366  216380 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1030 23:02:04.082379  216380 cache.go:56] Caching tarball of preloaded images
	I1030 23:02:04.082458  216380 preload.go:174] Found /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 23:02:04.082473  216380 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1030 23:02:04.082801  216380 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/config.json ...
	I1030 23:02:04.082827  216380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/config.json: {Name:mkdcf4abf58f6bf07fb7f55429ae427a24b5bfaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:02:04.082985  216380 start.go:365] acquiring machines lock for addons-780757: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 23:02:04.083056  216380 start.go:369] acquired machines lock for "addons-780757" in 54.336µs
	I1030 23:02:04.083075  216380 start.go:93] Provisioning new machine with config: &{Name:addons-780757 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-780757 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 23:02:04.083174  216380 start.go:125] createHost starting for "" (driver="kvm2")
	I1030 23:02:04.084696  216380 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1030 23:02:04.084823  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:02:04.084881  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:02:04.098011  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44887
	I1030 23:02:04.098526  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:02:04.099272  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:02:04.099296  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:02:04.099721  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:02:04.099922  216380 main.go:141] libmachine: (addons-780757) Calling .GetMachineName
	I1030 23:02:04.100049  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:02:04.100186  216380 start.go:159] libmachine.API.Create for "addons-780757" (driver="kvm2")
	I1030 23:02:04.100222  216380 client.go:168] LocalClient.Create starting
	I1030 23:02:04.100284  216380 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem
	I1030 23:02:04.174664  216380 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem
	I1030 23:02:04.233726  216380 main.go:141] libmachine: Running pre-create checks...
	I1030 23:02:04.233752  216380 main.go:141] libmachine: (addons-780757) Calling .PreCreateCheck
	I1030 23:02:04.234298  216380 main.go:141] libmachine: (addons-780757) Calling .GetConfigRaw
	I1030 23:02:04.234891  216380 main.go:141] libmachine: Creating machine...
	I1030 23:02:04.234918  216380 main.go:141] libmachine: (addons-780757) Calling .Create
	I1030 23:02:04.235130  216380 main.go:141] libmachine: (addons-780757) Creating KVM machine...
	I1030 23:02:04.236521  216380 main.go:141] libmachine: (addons-780757) DBG | found existing default KVM network
	I1030 23:02:04.237397  216380 main.go:141] libmachine: (addons-780757) DBG | I1030 23:02:04.237176  216403 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000147900}
	I1030 23:02:04.242824  216380 main.go:141] libmachine: (addons-780757) DBG | trying to create private KVM network mk-addons-780757 192.168.39.0/24...
	I1030 23:02:04.314062  216380 main.go:141] libmachine: (addons-780757) DBG | private KVM network mk-addons-780757 192.168.39.0/24 created
	I1030 23:02:04.314102  216380 main.go:141] libmachine: (addons-780757) Setting up store path in /home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757 ...
	I1030 23:02:04.314119  216380 main.go:141] libmachine: (addons-780757) DBG | I1030 23:02:04.313980  216403 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17527-208817/.minikube
	I1030 23:02:04.314170  216380 main.go:141] libmachine: (addons-780757) Building disk image from file:///home/jenkins/minikube-integration/17527-208817/.minikube/cache/iso/amd64/minikube-v1.32.0-1698684775-17527-amd64.iso
	I1030 23:02:04.314197  216380 main.go:141] libmachine: (addons-780757) Downloading /home/jenkins/minikube-integration/17527-208817/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17527-208817/.minikube/cache/iso/amd64/minikube-v1.32.0-1698684775-17527-amd64.iso...
	I1030 23:02:04.549672  216380 main.go:141] libmachine: (addons-780757) DBG | I1030 23:02:04.549509  216403 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa...
	I1030 23:02:04.852442  216380 main.go:141] libmachine: (addons-780757) DBG | I1030 23:02:04.852285  216403 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/addons-780757.rawdisk...
	I1030 23:02:04.852490  216380 main.go:141] libmachine: (addons-780757) DBG | Writing magic tar header
	I1030 23:02:04.852506  216380 main.go:141] libmachine: (addons-780757) DBG | Writing SSH key tar header
	I1030 23:02:04.852520  216380 main.go:141] libmachine: (addons-780757) DBG | I1030 23:02:04.852473  216403 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757 ...
	I1030 23:02:04.852652  216380 main.go:141] libmachine: (addons-780757) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757
	I1030 23:02:04.852688  216380 main.go:141] libmachine: (addons-780757) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube/machines
	I1030 23:02:04.852706  216380 main.go:141] libmachine: (addons-780757) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757 (perms=drwx------)
	I1030 23:02:04.852721  216380 main.go:141] libmachine: (addons-780757) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube
	I1030 23:02:04.852739  216380 main.go:141] libmachine: (addons-780757) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817
	I1030 23:02:04.852753  216380 main.go:141] libmachine: (addons-780757) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 23:02:04.852772  216380 main.go:141] libmachine: (addons-780757) DBG | Checking permissions on dir: /home/jenkins
	I1030 23:02:04.852787  216380 main.go:141] libmachine: (addons-780757) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube/machines (perms=drwxr-xr-x)
	I1030 23:02:04.852803  216380 main.go:141] libmachine: (addons-780757) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube (perms=drwxr-xr-x)
	I1030 23:02:04.852817  216380 main.go:141] libmachine: (addons-780757) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817 (perms=drwxrwxr-x)
	I1030 23:02:04.852831  216380 main.go:141] libmachine: (addons-780757) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 23:02:04.852840  216380 main.go:141] libmachine: (addons-780757) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 23:02:04.852852  216380 main.go:141] libmachine: (addons-780757) Creating domain...
	I1030 23:02:04.852866  216380 main.go:141] libmachine: (addons-780757) DBG | Checking permissions on dir: /home
	I1030 23:02:04.852883  216380 main.go:141] libmachine: (addons-780757) DBG | Skipping /home - not owner
	I1030 23:02:04.854036  216380 main.go:141] libmachine: (addons-780757) define libvirt domain using xml: 
	I1030 23:02:04.854060  216380 main.go:141] libmachine: (addons-780757) <domain type='kvm'>
	I1030 23:02:04.854068  216380 main.go:141] libmachine: (addons-780757)   <name>addons-780757</name>
	I1030 23:02:04.854074  216380 main.go:141] libmachine: (addons-780757)   <memory unit='MiB'>4000</memory>
	I1030 23:02:04.854081  216380 main.go:141] libmachine: (addons-780757)   <vcpu>2</vcpu>
	I1030 23:02:04.854086  216380 main.go:141] libmachine: (addons-780757)   <features>
	I1030 23:02:04.854091  216380 main.go:141] libmachine: (addons-780757)     <acpi/>
	I1030 23:02:04.854096  216380 main.go:141] libmachine: (addons-780757)     <apic/>
	I1030 23:02:04.854102  216380 main.go:141] libmachine: (addons-780757)     <pae/>
	I1030 23:02:04.854106  216380 main.go:141] libmachine: (addons-780757)     
	I1030 23:02:04.854112  216380 main.go:141] libmachine: (addons-780757)   </features>
	I1030 23:02:04.854120  216380 main.go:141] libmachine: (addons-780757)   <cpu mode='host-passthrough'>
	I1030 23:02:04.854126  216380 main.go:141] libmachine: (addons-780757)   
	I1030 23:02:04.854131  216380 main.go:141] libmachine: (addons-780757)   </cpu>
	I1030 23:02:04.854142  216380 main.go:141] libmachine: (addons-780757)   <os>
	I1030 23:02:04.854152  216380 main.go:141] libmachine: (addons-780757)     <type>hvm</type>
	I1030 23:02:04.854160  216380 main.go:141] libmachine: (addons-780757)     <boot dev='cdrom'/>
	I1030 23:02:04.854165  216380 main.go:141] libmachine: (addons-780757)     <boot dev='hd'/>
	I1030 23:02:04.854174  216380 main.go:141] libmachine: (addons-780757)     <bootmenu enable='no'/>
	I1030 23:02:04.854184  216380 main.go:141] libmachine: (addons-780757)   </os>
	I1030 23:02:04.854192  216380 main.go:141] libmachine: (addons-780757)   <devices>
	I1030 23:02:04.854197  216380 main.go:141] libmachine: (addons-780757)     <disk type='file' device='cdrom'>
	I1030 23:02:04.854208  216380 main.go:141] libmachine: (addons-780757)       <source file='/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/boot2docker.iso'/>
	I1030 23:02:04.854217  216380 main.go:141] libmachine: (addons-780757)       <target dev='hdc' bus='scsi'/>
	I1030 23:02:04.854224  216380 main.go:141] libmachine: (addons-780757)       <readonly/>
	I1030 23:02:04.854229  216380 main.go:141] libmachine: (addons-780757)     </disk>
	I1030 23:02:04.854266  216380 main.go:141] libmachine: (addons-780757)     <disk type='file' device='disk'>
	I1030 23:02:04.854292  216380 main.go:141] libmachine: (addons-780757)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 23:02:04.854309  216380 main.go:141] libmachine: (addons-780757)       <source file='/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/addons-780757.rawdisk'/>
	I1030 23:02:04.854321  216380 main.go:141] libmachine: (addons-780757)       <target dev='hda' bus='virtio'/>
	I1030 23:02:04.854328  216380 main.go:141] libmachine: (addons-780757)     </disk>
	I1030 23:02:04.854336  216380 main.go:141] libmachine: (addons-780757)     <interface type='network'>
	I1030 23:02:04.854353  216380 main.go:141] libmachine: (addons-780757)       <source network='mk-addons-780757'/>
	I1030 23:02:04.854366  216380 main.go:141] libmachine: (addons-780757)       <model type='virtio'/>
	I1030 23:02:04.854396  216380 main.go:141] libmachine: (addons-780757)     </interface>
	I1030 23:02:04.854416  216380 main.go:141] libmachine: (addons-780757)     <interface type='network'>
	I1030 23:02:04.854423  216380 main.go:141] libmachine: (addons-780757)       <source network='default'/>
	I1030 23:02:04.854429  216380 main.go:141] libmachine: (addons-780757)       <model type='virtio'/>
	I1030 23:02:04.854435  216380 main.go:141] libmachine: (addons-780757)     </interface>
	I1030 23:02:04.854442  216380 main.go:141] libmachine: (addons-780757)     <serial type='pty'>
	I1030 23:02:04.854448  216380 main.go:141] libmachine: (addons-780757)       <target port='0'/>
	I1030 23:02:04.854456  216380 main.go:141] libmachine: (addons-780757)     </serial>
	I1030 23:02:04.854462  216380 main.go:141] libmachine: (addons-780757)     <console type='pty'>
	I1030 23:02:04.854470  216380 main.go:141] libmachine: (addons-780757)       <target type='serial' port='0'/>
	I1030 23:02:04.854476  216380 main.go:141] libmachine: (addons-780757)     </console>
	I1030 23:02:04.854487  216380 main.go:141] libmachine: (addons-780757)     <rng model='virtio'>
	I1030 23:02:04.854503  216380 main.go:141] libmachine: (addons-780757)       <backend model='random'>/dev/random</backend>
	I1030 23:02:04.854516  216380 main.go:141] libmachine: (addons-780757)     </rng>
	I1030 23:02:04.854530  216380 main.go:141] libmachine: (addons-780757)     
	I1030 23:02:04.854542  216380 main.go:141] libmachine: (addons-780757)     
	I1030 23:02:04.854554  216380 main.go:141] libmachine: (addons-780757)   </devices>
	I1030 23:02:04.854565  216380 main.go:141] libmachine: (addons-780757) </domain>
	I1030 23:02:04.854582  216380 main.go:141] libmachine: (addons-780757) 
	I1030 23:02:04.858970  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:53:e4:63 in network default
	I1030 23:02:04.859625  216380 main.go:141] libmachine: (addons-780757) Ensuring networks are active...
	I1030 23:02:04.859651  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:04.860426  216380 main.go:141] libmachine: (addons-780757) Ensuring network default is active
	I1030 23:02:04.860711  216380 main.go:141] libmachine: (addons-780757) Ensuring network mk-addons-780757 is active
	I1030 23:02:04.861213  216380 main.go:141] libmachine: (addons-780757) Getting domain xml...
	I1030 23:02:04.861903  216380 main.go:141] libmachine: (addons-780757) Creating domain...
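	# Annotation: at this point the private libvirt network (mk-addons-780757, 192.168.39.0/24) and the
	# domain addons-780757 have been defined and the VM is about to be created. A hedged sketch of virsh
	# commands that could inspect the same objects on the host, assuming the qemu:///system URI used above:
	#   virsh --connect qemu:///system net-dumpxml mk-addons-780757      # private network created above
	#   virsh --connect qemu:///system dumpxml addons-780757             # domain XML just defined
	#   virsh --connect qemu:///system net-dhcp-leases mk-addons-780757  # DHCP lease the driver polls for below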
	I1030 23:02:06.074540  216380 main.go:141] libmachine: (addons-780757) Waiting to get IP...
	I1030 23:02:06.075226  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:06.075636  216380 main.go:141] libmachine: (addons-780757) DBG | unable to find current IP address of domain addons-780757 in network mk-addons-780757
	I1030 23:02:06.075763  216380 main.go:141] libmachine: (addons-780757) DBG | I1030 23:02:06.075635  216403 retry.go:31] will retry after 227.985489ms: waiting for machine to come up
	I1030 23:02:06.305200  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:06.305594  216380 main.go:141] libmachine: (addons-780757) DBG | unable to find current IP address of domain addons-780757 in network mk-addons-780757
	I1030 23:02:06.305625  216380 main.go:141] libmachine: (addons-780757) DBG | I1030 23:02:06.305530  216403 retry.go:31] will retry after 316.713713ms: waiting for machine to come up
	I1030 23:02:06.624034  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:06.624433  216380 main.go:141] libmachine: (addons-780757) DBG | unable to find current IP address of domain addons-780757 in network mk-addons-780757
	I1030 23:02:06.624461  216380 main.go:141] libmachine: (addons-780757) DBG | I1030 23:02:06.624385  216403 retry.go:31] will retry after 417.511081ms: waiting for machine to come up
	I1030 23:02:07.044024  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:07.044501  216380 main.go:141] libmachine: (addons-780757) DBG | unable to find current IP address of domain addons-780757 in network mk-addons-780757
	I1030 23:02:07.044526  216380 main.go:141] libmachine: (addons-780757) DBG | I1030 23:02:07.044451  216403 retry.go:31] will retry after 464.237686ms: waiting for machine to come up
	I1030 23:02:07.509856  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:07.510269  216380 main.go:141] libmachine: (addons-780757) DBG | unable to find current IP address of domain addons-780757 in network mk-addons-780757
	I1030 23:02:07.510311  216380 main.go:141] libmachine: (addons-780757) DBG | I1030 23:02:07.510227  216403 retry.go:31] will retry after 586.404238ms: waiting for machine to come up
	I1030 23:02:08.098065  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:08.098531  216380 main.go:141] libmachine: (addons-780757) DBG | unable to find current IP address of domain addons-780757 in network mk-addons-780757
	I1030 23:02:08.098559  216380 main.go:141] libmachine: (addons-780757) DBG | I1030 23:02:08.098503  216403 retry.go:31] will retry after 948.533613ms: waiting for machine to come up
	I1030 23:02:09.048593  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:09.049131  216380 main.go:141] libmachine: (addons-780757) DBG | unable to find current IP address of domain addons-780757 in network mk-addons-780757
	I1030 23:02:09.049162  216380 main.go:141] libmachine: (addons-780757) DBG | I1030 23:02:09.049074  216403 retry.go:31] will retry after 895.554646ms: waiting for machine to come up
	I1030 23:02:09.947115  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:09.947590  216380 main.go:141] libmachine: (addons-780757) DBG | unable to find current IP address of domain addons-780757 in network mk-addons-780757
	I1030 23:02:09.947616  216380 main.go:141] libmachine: (addons-780757) DBG | I1030 23:02:09.947520  216403 retry.go:31] will retry after 1.279724521s: waiting for machine to come up
	I1030 23:02:11.228380  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:11.228918  216380 main.go:141] libmachine: (addons-780757) DBG | unable to find current IP address of domain addons-780757 in network mk-addons-780757
	I1030 23:02:11.228973  216380 main.go:141] libmachine: (addons-780757) DBG | I1030 23:02:11.228860  216403 retry.go:31] will retry after 1.80900708s: waiting for machine to come up
	I1030 23:02:13.040167  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:13.040683  216380 main.go:141] libmachine: (addons-780757) DBG | unable to find current IP address of domain addons-780757 in network mk-addons-780757
	I1030 23:02:13.040713  216380 main.go:141] libmachine: (addons-780757) DBG | I1030 23:02:13.040602  216403 retry.go:31] will retry after 1.648587681s: waiting for machine to come up
	I1030 23:02:14.691282  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:14.691706  216380 main.go:141] libmachine: (addons-780757) DBG | unable to find current IP address of domain addons-780757 in network mk-addons-780757
	I1030 23:02:14.691733  216380 main.go:141] libmachine: (addons-780757) DBG | I1030 23:02:14.691678  216403 retry.go:31] will retry after 2.575631324s: waiting for machine to come up
	I1030 23:02:17.270419  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:17.270810  216380 main.go:141] libmachine: (addons-780757) DBG | unable to find current IP address of domain addons-780757 in network mk-addons-780757
	I1030 23:02:17.270847  216380 main.go:141] libmachine: (addons-780757) DBG | I1030 23:02:17.270733  216403 retry.go:31] will retry after 2.689374318s: waiting for machine to come up
	I1030 23:02:19.961629  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:19.962051  216380 main.go:141] libmachine: (addons-780757) DBG | unable to find current IP address of domain addons-780757 in network mk-addons-780757
	I1030 23:02:19.962082  216380 main.go:141] libmachine: (addons-780757) DBG | I1030 23:02:19.962004  216403 retry.go:31] will retry after 3.030728922s: waiting for machine to come up
	I1030 23:02:22.993957  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:22.994362  216380 main.go:141] libmachine: (addons-780757) DBG | unable to find current IP address of domain addons-780757 in network mk-addons-780757
	I1030 23:02:22.994386  216380 main.go:141] libmachine: (addons-780757) DBG | I1030 23:02:22.994309  216403 retry.go:31] will retry after 3.977684449s: waiting for machine to come up
	I1030 23:02:26.973169  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:26.973609  216380 main.go:141] libmachine: (addons-780757) Found IP for machine: 192.168.39.172
	I1030 23:02:26.973631  216380 main.go:141] libmachine: (addons-780757) Reserving static IP address...
	I1030 23:02:26.973647  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has current primary IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:26.974165  216380 main.go:141] libmachine: (addons-780757) DBG | unable to find host DHCP lease matching {name: "addons-780757", mac: "52:54:00:29:88:e5", ip: "192.168.39.172"} in network mk-addons-780757
	I1030 23:02:27.047021  216380 main.go:141] libmachine: (addons-780757) DBG | Getting to WaitForSSH function...
	I1030 23:02:27.047065  216380 main.go:141] libmachine: (addons-780757) Reserved static IP address: 192.168.39.172
	I1030 23:02:27.047115  216380 main.go:141] libmachine: (addons-780757) Waiting for SSH to be available...
	I1030 23:02:27.049879  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:27.050337  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:minikube Clientid:01:52:54:00:29:88:e5}
	I1030 23:02:27.050363  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:27.050563  216380 main.go:141] libmachine: (addons-780757) DBG | Using SSH client type: external
	I1030 23:02:27.050576  216380 main.go:141] libmachine: (addons-780757) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa (-rw-------)
	I1030 23:02:27.050596  216380 main.go:141] libmachine: (addons-780757) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 23:02:27.050615  216380 main.go:141] libmachine: (addons-780757) DBG | About to run SSH command:
	I1030 23:02:27.050628  216380 main.go:141] libmachine: (addons-780757) DBG | exit 0
	I1030 23:02:27.140449  216380 main.go:141] libmachine: (addons-780757) DBG | SSH cmd err, output: <nil>: 
	I1030 23:02:27.140686  216380 main.go:141] libmachine: (addons-780757) KVM machine creation complete!
	I1030 23:02:27.141019  216380 main.go:141] libmachine: (addons-780757) Calling .GetConfigRaw
	I1030 23:02:27.141587  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:02:27.141805  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:02:27.141990  216380 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 23:02:27.142005  216380 main.go:141] libmachine: (addons-780757) Calling .GetState
	I1030 23:02:27.143354  216380 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 23:02:27.143373  216380 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 23:02:27.143380  216380 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 23:02:27.143390  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:02:27.145667  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:27.145992  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:02:27.146026  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:27.146111  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:02:27.146290  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:02:27.146447  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:02:27.146602  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:02:27.146762  216380 main.go:141] libmachine: Using SSH client type: native
	I1030 23:02:27.147165  216380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1030 23:02:27.147180  216380 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 23:02:27.263754  216380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 23:02:27.263778  216380 main.go:141] libmachine: Detecting the provisioner...
	I1030 23:02:27.263787  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:02:27.266454  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:27.266784  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:02:27.266807  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:27.266946  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:02:27.267142  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:02:27.267295  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:02:27.267437  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:02:27.267585  216380 main.go:141] libmachine: Using SSH client type: native
	I1030 23:02:27.267896  216380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1030 23:02:27.267908  216380 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 23:02:27.385672  216380 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gea8740b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1030 23:02:27.385793  216380 main.go:141] libmachine: found compatible host: buildroot
	I1030 23:02:27.385808  216380 main.go:141] libmachine: Provisioning with buildroot...
	I1030 23:02:27.385823  216380 main.go:141] libmachine: (addons-780757) Calling .GetMachineName
	I1030 23:02:27.386132  216380 buildroot.go:166] provisioning hostname "addons-780757"
	I1030 23:02:27.386166  216380 main.go:141] libmachine: (addons-780757) Calling .GetMachineName
	I1030 23:02:27.386373  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:02:27.388933  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:27.389265  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:02:27.389312  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:27.389504  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:02:27.389740  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:02:27.389930  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:02:27.390124  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:02:27.390308  216380 main.go:141] libmachine: Using SSH client type: native
	I1030 23:02:27.390636  216380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1030 23:02:27.390650  216380 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-780757 && echo "addons-780757" | sudo tee /etc/hostname
	I1030 23:02:27.521764  216380 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-780757
	
	I1030 23:02:27.521821  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:02:27.524502  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:27.524928  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:02:27.524983  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:27.525092  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:02:27.525328  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:02:27.525506  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:02:27.525702  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:02:27.525836  216380 main.go:141] libmachine: Using SSH client type: native
	I1030 23:02:27.526197  216380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1030 23:02:27.526224  216380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-780757' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-780757/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-780757' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 23:02:27.653196  216380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
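The two SSH commands above set the guest hostname and make the /etc/hosts change idempotent (the 127.0.1.1 entry is only rewritten when the name is missing). A quick way to confirm the result from the host, sketched here with minikube ssh against the profile name used in this run (assumes the profile is still up):
	# hedged sketch: verify the hostname and /etc/hosts entry the provisioner just wrote
	minikube -p addons-780757 ssh -- hostname
	minikube -p addons-780757 ssh -- grep addons-780757 /etc/hosts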
	I1030 23:02:27.653229  216380 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1030 23:02:27.653257  216380 buildroot.go:174] setting up certificates
	I1030 23:02:27.653271  216380 provision.go:83] configureAuth start
	I1030 23:02:27.653285  216380 main.go:141] libmachine: (addons-780757) Calling .GetMachineName
	I1030 23:02:27.653694  216380 main.go:141] libmachine: (addons-780757) Calling .GetIP
	I1030 23:02:27.656449  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:27.656828  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:02:27.656863  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:27.657001  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:02:27.659369  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:27.659707  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:02:27.659736  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:27.659868  216380 provision.go:138] copyHostCerts
	I1030 23:02:27.659977  216380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1030 23:02:27.660128  216380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1030 23:02:27.660184  216380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1030 23:02:27.660227  216380 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.addons-780757 san=[192.168.39.172 192.168.39.172 localhost 127.0.0.1 minikube addons-780757]
	I1030 23:02:27.943193  216380 provision.go:172] copyRemoteCerts
	I1030 23:02:27.943296  216380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 23:02:27.943326  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:02:27.946180  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:27.946631  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:02:27.946677  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:27.946869  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:02:27.947063  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:02:27.947249  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:02:27.947370  216380 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa Username:docker}
	I1030 23:02:28.035889  216380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1030 23:02:28.060826  216380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 23:02:28.086169  216380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1030 23:02:28.110508  216380 provision.go:86] duration metric: configureAuth took 457.221029ms
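configureAuth above generated a server certificate whose SANs cover the VM IP, localhost and the machine names before copying it into the guest. A sketch for inspecting those SANs with openssl (the path is the one logged above; running it elsewhere requires adjusting the path):
	# hedged sketch: list the Subject Alternative Names baked into the generated server cert
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'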
	I1030 23:02:28.110538  216380 buildroot.go:189] setting minikube options for container-runtime
	I1030 23:02:28.110782  216380 config.go:182] Loaded profile config "addons-780757": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:02:28.110886  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:02:28.113602  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:28.113919  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:02:28.113960  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:28.114134  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:02:28.114349  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:02:28.114526  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:02:28.114697  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:02:28.114852  216380 main.go:141] libmachine: Using SSH client type: native
	I1030 23:02:28.115306  216380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1030 23:02:28.115334  216380 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 23:02:28.409068  216380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
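The sysconfig drop-in written above passes --insecure-registry for the service CIDR to CRI-O and restarts the service. A sketch to confirm it landed, again via the profile's SSH access (profile name taken from this run):
	# hedged sketch: confirm the CRI-O override file and that the daemon came back up
	minikube -p addons-780757 ssh -- cat /etc/sysconfig/crio.minikube
	minikube -p addons-780757 ssh -- sudo systemctl is-active crio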
	
	I1030 23:02:28.409103  216380 main.go:141] libmachine: Checking connection to Docker...
	I1030 23:02:28.409140  216380 main.go:141] libmachine: (addons-780757) Calling .GetURL
	I1030 23:02:28.410640  216380 main.go:141] libmachine: (addons-780757) DBG | Using libvirt version 6000000
	I1030 23:02:28.412808  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:28.413244  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:02:28.413278  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:28.413472  216380 main.go:141] libmachine: Docker is up and running!
	I1030 23:02:28.413493  216380 main.go:141] libmachine: Reticulating splines...
	I1030 23:02:28.413517  216380 client.go:171] LocalClient.Create took 24.313266764s
	I1030 23:02:28.413549  216380 start.go:167] duration metric: libmachine.API.Create for "addons-780757" took 24.313361373s
	I1030 23:02:28.413566  216380 start.go:300] post-start starting for "addons-780757" (driver="kvm2")
	I1030 23:02:28.413580  216380 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 23:02:28.413610  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:02:28.413850  216380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 23:02:28.413890  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:02:28.415975  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:28.416304  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:02:28.416329  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:28.416431  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:02:28.416620  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:02:28.416762  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:02:28.416898  216380 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa Username:docker}
	I1030 23:02:28.508152  216380 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 23:02:28.512575  216380 info.go:137] Remote host: Buildroot 2021.02.12
	I1030 23:02:28.512590  216380 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1030 23:02:28.512646  216380 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1030 23:02:28.512718  216380 start.go:303] post-start completed in 99.142339ms
	I1030 23:02:28.512760  216380 main.go:141] libmachine: (addons-780757) Calling .GetConfigRaw
	I1030 23:02:28.513379  216380 main.go:141] libmachine: (addons-780757) Calling .GetIP
	I1030 23:02:28.515655  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:28.516038  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:02:28.516057  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:28.516303  216380 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/config.json ...
	I1030 23:02:28.516481  216380 start.go:128] duration metric: createHost completed in 24.433295407s
	I1030 23:02:28.516507  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:02:28.518664  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:28.518966  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:02:28.518995  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:28.519129  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:02:28.519317  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:02:28.519464  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:02:28.519588  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:02:28.519732  216380 main.go:141] libmachine: Using SSH client type: native
	I1030 23:02:28.520036  216380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1030 23:02:28.520048  216380 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 23:02:28.637645  216380 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698706948.616721557
	
	I1030 23:02:28.637675  216380 fix.go:206] guest clock: 1698706948.616721557
	I1030 23:02:28.637686  216380 fix.go:219] Guest: 2023-10-30 23:02:28.616721557 +0000 UTC Remote: 2023-10-30 23:02:28.516495976 +0000 UTC m=+24.546762498 (delta=100.225581ms)
	I1030 23:02:28.637752  216380 fix.go:190] guest clock delta is within tolerance: 100.225581ms
	I1030 23:02:28.637762  216380 start.go:83] releasing machines lock for "addons-780757", held for 24.554694925s
	I1030 23:02:28.637792  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:02:28.638095  216380 main.go:141] libmachine: (addons-780757) Calling .GetIP
	I1030 23:02:28.640759  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:28.641187  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:02:28.641244  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:28.641385  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:02:28.641792  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:02:28.642057  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:02:28.642214  216380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 23:02:28.642261  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:02:28.642280  216380 ssh_runner.go:195] Run: cat /version.json
	I1030 23:02:28.642297  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:02:28.644910  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:28.645255  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:02:28.645289  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:28.645311  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:28.645452  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:02:28.645645  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:02:28.645730  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:02:28.645767  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:28.645785  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:02:28.645948  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:02:28.645944  216380 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa Username:docker}
	I1030 23:02:28.646134  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:02:28.646270  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:02:28.646415  216380 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa Username:docker}
	I1030 23:02:28.730548  216380 ssh_runner.go:195] Run: systemctl --version
	I1030 23:02:28.766013  216380 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 23:02:28.918435  216380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 23:02:28.925355  216380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 23:02:28.925446  216380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 23:02:28.938678  216380 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 23:02:28.938701  216380 start.go:472] detecting cgroup driver to use...
	I1030 23:02:28.938758  216380 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 23:02:28.951517  216380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 23:02:28.964287  216380 docker.go:198] disabling cri-docker service (if available) ...
	I1030 23:02:28.964338  216380 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 23:02:28.978290  216380 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 23:02:28.991197  216380 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 23:02:29.098627  216380 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 23:02:29.210338  216380 docker.go:214] disabling docker service ...
	I1030 23:02:29.210412  216380 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 23:02:29.224381  216380 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 23:02:29.235738  216380 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 23:02:29.358812  216380 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 23:02:29.479759  216380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 23:02:29.492525  216380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 23:02:29.509055  216380 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1030 23:02:29.509128  216380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:02:29.517855  216380 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 23:02:29.517912  216380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:02:29.526390  216380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:02:29.535175  216380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:02:29.544030  216380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
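The sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager and move conmon into the pod cgroup, all in the 02-crio.conf drop-in. A one-line sketch to show the resulting keys on the guest (paths as logged above):
	# hedged sketch: show the keys the sed edits leave behind in the CRI-O drop-in
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf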
	I1030 23:02:29.553211  216380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 23:02:29.561006  216380 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 23:02:29.561056  216380 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 23:02:29.573228  216380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
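The sysctl probe above exits with status 255 because br_netfilter is not loaded yet; once the modprobe succeeds the bridge netfilter knob exists and ip_forward is switched on. A sketch of the same check after the module load:
	# hedged sketch: re-check the netfilter knobs once br_netfilter is loaded
	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward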
	I1030 23:02:29.582102  216380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 23:02:29.700834  216380 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 23:02:29.872914  216380 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 23:02:29.873012  216380 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 23:02:29.878595  216380 start.go:540] Will wait 60s for crictl version
	I1030 23:02:29.878654  216380 ssh_runner.go:195] Run: which crictl
	I1030 23:02:29.882318  216380 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 23:02:29.916845  216380 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1030 23:02:29.916975  216380 ssh_runner.go:195] Run: crio --version
	I1030 23:02:29.964463  216380 ssh_runner.go:195] Run: crio --version
	I1030 23:02:30.012611  216380 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1030 23:02:30.013855  216380 main.go:141] libmachine: (addons-780757) Calling .GetIP
	I1030 23:02:30.016588  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:30.016921  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:02:30.016965  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:02:30.017120  216380 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 23:02:30.021119  216380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 23:02:30.033135  216380 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1030 23:02:30.033193  216380 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 23:02:30.065708  216380 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1030 23:02:30.065794  216380 ssh_runner.go:195] Run: which lz4
	I1030 23:02:30.070496  216380 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1030 23:02:30.074450  216380 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 23:02:30.074528  216380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1030 23:02:31.820222  216380 crio.go:444] Took 1.749756 seconds to copy over tarball
	I1030 23:02:31.820310  216380 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 23:02:34.752876  216380 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.932537243s)
	I1030 23:02:34.752914  216380 crio.go:451] Took 2.932662 seconds to extract the tarball
	I1030 23:02:34.752928  216380 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 23:02:34.796879  216380 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 23:02:34.878224  216380 crio.go:496] all images are preloaded for cri-o runtime.
	I1030 23:02:34.878250  216380 cache_images.go:84] Images are preloaded, skipping loading
	I1030 23:02:34.878317  216380 ssh_runner.go:195] Run: crio config
	I1030 23:02:34.946381  216380 cni.go:84] Creating CNI manager for ""
	I1030 23:02:34.946402  216380 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 23:02:34.946423  216380 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1030 23:02:34.946446  216380 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-780757 NodeName:addons-780757 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 23:02:34.946588  216380 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-780757"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
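The rendered kubeadm config above is what later gets copied to /var/tmp/minikube/kubeadm.yaml and fed to kubeadm init. A sketch for exercising it without side effects, assuming the same in-guest paths shown further down in this log:
	# hedged sketch: dry-run the rendered config before the real init
	sudo /var/lib/minikube/binaries/v1.28.3/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run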
	
	I1030 23:02:34.946654  216380 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-780757 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-780757 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1030 23:02:34.946705  216380 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1030 23:02:34.956789  216380 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 23:02:34.956845  216380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 23:02:34.965997  216380 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1030 23:02:34.982118  216380 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 23:02:34.997339  216380 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1030 23:02:35.013001  216380 ssh_runner.go:195] Run: grep 192.168.39.172	control-plane.minikube.internal$ /etc/hosts
	I1030 23:02:35.016621  216380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 23:02:35.027911  216380 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757 for IP: 192.168.39.172
	I1030 23:02:35.027962  216380 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:02:35.028159  216380 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1030 23:02:35.256431  216380 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt ...
	I1030 23:02:35.256462  216380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt: {Name:mkbcc95d5a0333f713231100779eb1bdd30c3493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:02:35.256630  216380 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key ...
	I1030 23:02:35.256641  216380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key: {Name:mkbde4ee7ec948def3189514ad378b14d0c2f1f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:02:35.256746  216380 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1030 23:02:35.458692  216380 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt ...
	I1030 23:02:35.458721  216380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt: {Name:mk1fdcc50b06b135548aa41b9ec3e7c21bfe72a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:02:35.458869  216380 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key ...
	I1030 23:02:35.458879  216380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key: {Name:mk3a557499af32e91731f24769a3012a6e007d6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:02:35.458988  216380 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.key
	I1030 23:02:35.459006  216380 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt with IP's: []
	I1030 23:02:35.736652  216380 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt ...
	I1030 23:02:35.736691  216380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: {Name:mk17eb11e3776aadd885987cf75a9436127f9fd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:02:35.736910  216380 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.key ...
	I1030 23:02:35.736929  216380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.key: {Name:mk3ca8d4449c4e4b0cbe69c5aac73ab5e72cd7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:02:35.737065  216380 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/apiserver.key.ee96354a
	I1030 23:02:35.737089  216380 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/apiserver.crt.ee96354a with IP's: [192.168.39.172 10.96.0.1 127.0.0.1 10.0.0.1]
	I1030 23:02:35.938453  216380 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/apiserver.crt.ee96354a ...
	I1030 23:02:35.938489  216380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/apiserver.crt.ee96354a: {Name:mke45f9986e130828ea3155bd30c1f9ff58fac38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:02:35.938671  216380 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/apiserver.key.ee96354a ...
	I1030 23:02:35.938689  216380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/apiserver.key.ee96354a: {Name:mkab2d039b173340dd6aee99494d1050f80b75d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:02:35.938797  216380 certs.go:337] copying /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/apiserver.crt.ee96354a -> /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/apiserver.crt
	I1030 23:02:35.938935  216380 certs.go:341] copying /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/apiserver.key.ee96354a -> /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/apiserver.key
	I1030 23:02:35.939010  216380 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/proxy-client.key
	I1030 23:02:35.939036  216380 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/proxy-client.crt with IP's: []
	I1030 23:02:36.124778  216380 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/proxy-client.crt ...
	I1030 23:02:36.124812  216380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/proxy-client.crt: {Name:mk07d74d20299db3c2ed3dc9db29d9dbf7520850 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:02:36.125001  216380 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/proxy-client.key ...
	I1030 23:02:36.125020  216380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/proxy-client.key: {Name:mked1b00b03d042ae7ccb709a0eecd99e86fdb5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:02:36.125221  216380 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 23:02:36.125269  216380 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1030 23:02:36.125309  216380 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1030 23:02:36.125344  216380 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1030 23:02:36.126127  216380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1030 23:02:36.149230  216380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1030 23:02:36.170185  216380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 23:02:36.191702  216380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 23:02:36.213016  216380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 23:02:36.234057  216380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 23:02:36.255200  216380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 23:02:36.276509  216380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1030 23:02:36.297596  216380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 23:02:36.318832  216380 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1030 23:02:36.333785  216380 ssh_runner.go:195] Run: openssl version
	I1030 23:02:36.338829  216380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 23:02:36.348717  216380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:02:36.353482  216380 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:02:36.353525  216380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:02:36.358691  216380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
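The b5213941.0 link name above is the OpenSSL subject hash of minikubeCA.pem, which is why the previous step runs openssl x509 -hash first. A sketch that derives the same link name explicitly on the guest:
	# hedged sketch: compute the hash-named symlink for the minikube CA
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"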
	I1030 23:02:36.368803  216380 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1030 23:02:36.372616  216380 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1030 23:02:36.372672  216380 kubeadm.go:404] StartCluster: {Name:addons-780757 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-780757 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1030 23:02:36.372787  216380 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 23:02:36.372826  216380 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 23:02:36.413073  216380 cri.go:89] found id: ""
	I1030 23:02:36.413175  216380 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 23:02:36.422658  216380 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 23:02:36.431809  216380 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 23:02:36.441354  216380 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 23:02:36.441400  216380 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 23:02:36.490468  216380 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1030 23:02:36.490546  216380 kubeadm.go:322] [preflight] Running pre-flight checks
	I1030 23:02:36.629669  216380 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 23:02:36.629808  216380 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 23:02:36.629891  216380 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 23:02:36.864687  216380 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 23:02:36.973657  216380 out.go:204]   - Generating certificates and keys ...
	I1030 23:02:36.973817  216380 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1030 23:02:36.973925  216380 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1030 23:02:37.007258  216380 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1030 23:02:37.369754  216380 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1030 23:02:37.524055  216380 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1030 23:02:37.651913  216380 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1030 23:02:37.812855  216380 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1030 23:02:37.813398  216380 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-780757 localhost] and IPs [192.168.39.172 127.0.0.1 ::1]
	I1030 23:02:37.981840  216380 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1030 23:02:37.982033  216380 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-780757 localhost] and IPs [192.168.39.172 127.0.0.1 ::1]
	I1030 23:02:38.178687  216380 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1030 23:02:38.409489  216380 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1030 23:02:38.706501  216380 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1030 23:02:38.706587  216380 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 23:02:38.892847  216380 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 23:02:39.054826  216380 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 23:02:39.128654  216380 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 23:02:39.208895  216380 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 23:02:39.209684  216380 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 23:02:39.212032  216380 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 23:02:39.213762  216380 out.go:204]   - Booting up control plane ...
	I1030 23:02:39.213884  216380 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 23:02:39.213980  216380 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 23:02:39.214310  216380 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 23:02:39.235505  216380 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 23:02:39.238202  216380 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 23:02:39.238278  216380 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1030 23:02:39.365756  216380 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 23:02:46.865975  216380 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502367 seconds
	I1030 23:02:46.866148  216380 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1030 23:02:46.880505  216380 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1030 23:02:47.413207  216380 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1030 23:02:47.413459  216380 kubeadm.go:322] [mark-control-plane] Marking the node addons-780757 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1030 23:02:47.927608  216380 kubeadm.go:322] [bootstrap-token] Using token: ak5sfz.zo6gr4xhqeuuhadh
	I1030 23:02:47.929091  216380 out.go:204]   - Configuring RBAC rules ...
	I1030 23:02:47.929189  216380 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1030 23:02:47.934608  216380 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1030 23:02:47.947695  216380 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1030 23:02:47.951359  216380 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1030 23:02:47.954880  216380 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1030 23:02:47.960440  216380 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1030 23:02:47.976774  216380 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1030 23:02:48.254953  216380 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1030 23:02:48.347518  216380 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1030 23:02:48.348658  216380 kubeadm.go:322] 
	I1030 23:02:48.348747  216380 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1030 23:02:48.348768  216380 kubeadm.go:322] 
	I1030 23:02:48.348828  216380 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1030 23:02:48.348836  216380 kubeadm.go:322] 
	I1030 23:02:48.348894  216380 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1030 23:02:48.349001  216380 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1030 23:02:48.349105  216380 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1030 23:02:48.349115  216380 kubeadm.go:322] 
	I1030 23:02:48.349191  216380 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1030 23:02:48.349200  216380 kubeadm.go:322] 
	I1030 23:02:48.349287  216380 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1030 23:02:48.349307  216380 kubeadm.go:322] 
	I1030 23:02:48.349379  216380 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1030 23:02:48.349472  216380 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1030 23:02:48.349568  216380 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1030 23:02:48.349593  216380 kubeadm.go:322] 
	I1030 23:02:48.349705  216380 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1030 23:02:48.349803  216380 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1030 23:02:48.349816  216380 kubeadm.go:322] 
	I1030 23:02:48.349926  216380 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ak5sfz.zo6gr4xhqeuuhadh \
	I1030 23:02:48.350044  216380 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 \
	I1030 23:02:48.350076  216380 kubeadm.go:322] 	--control-plane 
	I1030 23:02:48.350086  216380 kubeadm.go:322] 
	I1030 23:02:48.350184  216380 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1030 23:02:48.350197  216380 kubeadm.go:322] 
	I1030 23:02:48.350293  216380 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ak5sfz.zo6gr4xhqeuuhadh \
	I1030 23:02:48.350418  216380 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1030 23:02:48.350740  216380 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
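The bootstrap token printed above (ak5sfz.zo6gr4xhqeuuhadh) is short-lived, so the join commands in this kubeadm output stop working once it expires. A fresh join command can normally be regenerated on the control-plane node; a minimal example, assuming kubeadm is on the PATH there:

    # print a fresh 'kubeadm join ...' line backed by a newly created bootstrap token
    sudo kubeadm token create --print-join-command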
	I1030 23:02:48.350787  216380 cni.go:84] Creating CNI manager for ""
	I1030 23:02:48.350805  216380 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 23:02:48.352717  216380 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 23:02:48.354001  216380 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 23:02:48.376108  216380 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
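The 457 bytes copied to /etc/cni/net.d/1-k8s.conflist above are minikube's bridge CNI configuration for the crio runtime. One way to inspect what actually landed on the node (profile name taken from this run; the file is typically a standard bridge plugin chain with host-local IPAM):

    # dump the bridge CNI config that minikube just wrote
    out/minikube-linux-amd64 -p addons-780757 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"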
	I1030 23:02:48.398582  216380 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 23:02:48.398682  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:48.398726  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4 minikube.k8s.io/name=addons-780757 minikube.k8s.io/updated_at=2023_10_30T23_02_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:48.461098  216380 ops.go:34] apiserver oom_adj: -16
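The -16 read back here is the kube-apiserver's OOM score adjustment, meaning the kernel is told to avoid killing the apiserver under memory pressure. The same probe can be repeated from outside the VM (single quotes keep the $(pgrep ...) expansion on the node side):

    # re-check the apiserver's oom_adj on the node
    out/minikube-linux-amd64 -p addons-780757 ssh 'cat /proc/$(pgrep kube-apiserver)/oom_adj'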
	I1030 23:02:48.666775  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:48.775961  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:49.367672  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:49.868442  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:50.367594  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:50.867843  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:51.367921  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:51.868385  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:52.368329  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:52.868411  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:53.367800  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:53.867667  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:54.368066  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:54.867709  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:55.367634  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:55.867731  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:56.367585  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:56.867700  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:57.367451  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:57.867711  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:58.367987  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:58.867604  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:59.367502  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:02:59.868446  216380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:03:00.003556  216380 kubeadm.go:1081] duration metric: took 11.604944021s to wait for elevateKubeSystemPrivileges.
	I1030 23:03:00.003600  216380 kubeadm.go:406] StartCluster complete in 23.63093397s
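The repeated 'kubectl get sa default' runs above are minikube waiting for the default ServiceAccount to be created before it proceeds. The same state, plus the minikube-rbac binding created at 23:02:48, can be checked by hand with the context this test uses:

    # confirm the default service account exists and the cluster-admin binding is in place
    kubectl --context addons-780757 -n default get serviceaccount default
    kubectl --context addons-780757 get clusterrolebinding minikube-rbac -o wide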
	I1030 23:03:00.003627  216380 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:03:00.003788  216380 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:03:00.004173  216380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:03:00.004387  216380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1030 23:03:00.004528  216380 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
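The toEnable map above is the addon set this run switches on for the addons-780757 profile. The same set can be listed or toggled individually from the CLI, using the addon names exactly as they appear in the map, for example:

    # show addon status for this profile, then turn one on or off explicitly
    out/minikube-linux-amd64 -p addons-780757 addons list
    out/minikube-linux-amd64 -p addons-780757 addons enable metrics-server
    out/minikube-linux-amd64 -p addons-780757 addons disable registry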
	I1030 23:03:00.004636  216380 addons.go:69] Setting volumesnapshots=true in profile "addons-780757"
	I1030 23:03:00.004657  216380 config.go:182] Loaded profile config "addons-780757": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:03:00.004694  216380 addons.go:231] Setting addon volumesnapshots=true in "addons-780757"
	I1030 23:03:00.004699  216380 addons.go:69] Setting ingress-dns=true in profile "addons-780757"
	I1030 23:03:00.004718  216380 addons.go:69] Setting inspektor-gadget=true in profile "addons-780757"
	I1030 23:03:00.004728  216380 addons.go:231] Setting addon ingress-dns=true in "addons-780757"
	I1030 23:03:00.004730  216380 addons.go:69] Setting helm-tiller=true in profile "addons-780757"
	I1030 23:03:00.004747  216380 addons.go:69] Setting gcp-auth=true in profile "addons-780757"
	I1030 23:03:00.004750  216380 addons.go:69] Setting storage-provisioner=true in profile "addons-780757"
	I1030 23:03:00.004766  216380 addons.go:231] Setting addon storage-provisioner=true in "addons-780757"
	I1030 23:03:00.004785  216380 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-780757"
	I1030 23:03:00.004772  216380 addons.go:69] Setting registry=true in profile "addons-780757"
	I1030 23:03:00.004803  216380 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-780757"
	I1030 23:03:00.004810  216380 addons.go:231] Setting addon registry=true in "addons-780757"
	I1030 23:03:00.004812  216380 host.go:66] Checking if "addons-780757" exists ...
	I1030 23:03:00.004716  216380 addons.go:69] Setting default-storageclass=true in profile "addons-780757"
	I1030 23:03:00.004847  216380 host.go:66] Checking if "addons-780757" exists ...
	I1030 23:03:00.004857  216380 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-780757"
	I1030 23:03:00.004768  216380 addons.go:231] Setting addon helm-tiller=true in "addons-780757"
	I1030 23:03:00.004998  216380 host.go:66] Checking if "addons-780757" exists ...
	I1030 23:03:00.004740  216380 addons.go:69] Setting metrics-server=true in profile "addons-780757"
	I1030 23:03:00.005040  216380 addons.go:231] Setting addon metrics-server=true in "addons-780757"
	I1030 23:03:00.005094  216380 host.go:66] Checking if "addons-780757" exists ...
	I1030 23:03:00.004736  216380 addons.go:231] Setting addon inspektor-gadget=true in "addons-780757"
	I1030 23:03:00.005308  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.005345  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.005364  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.005369  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.004767  216380 mustload.go:65] Loading cluster: addons-780757
	I1030 23:03:00.005392  216380 host.go:66] Checking if "addons-780757" exists ...
	I1030 23:03:00.005466  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.004704  216380 addons.go:69] Setting cloud-spanner=true in profile "addons-780757"
	I1030 23:03:00.005485  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.005497  216380 addons.go:231] Setting addon cloud-spanner=true in "addons-780757"
	I1030 23:03:00.005535  216380 host.go:66] Checking if "addons-780757" exists ...
	I1030 23:03:00.005606  216380 addons.go:69] Setting ingress=true in profile "addons-780757"
	I1030 23:03:00.005649  216380 addons.go:231] Setting addon ingress=true in "addons-780757"
	I1030 23:03:00.005703  216380 host.go:66] Checking if "addons-780757" exists ...
	I1030 23:03:00.004788  216380 host.go:66] Checking if "addons-780757" exists ...
	I1030 23:03:00.005797  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.005822  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.005881  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.005909  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.004713  216380 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-780757"
	I1030 23:03:00.005998  216380 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-780757"
	I1030 23:03:00.006051  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.005649  216380 config.go:182] Loaded profile config "addons-780757": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:03:00.006137  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.006074  216380 host.go:66] Checking if "addons-780757" exists ...
	I1030 23:03:00.005290  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.006516  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.006520  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.006559  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.006576  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.006610  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.005391  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.006836  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.005290  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.006912  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.004790  216380 host.go:66] Checking if "addons-780757" exists ...
	I1030 23:03:00.006098  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.007203  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.004737  216380 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-780757"
	I1030 23:03:00.007342  216380 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-780757"
	I1030 23:03:00.007394  216380 host.go:66] Checking if "addons-780757" exists ...
	I1030 23:03:00.024380  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41463
	I1030 23:03:00.024458  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41015
	I1030 23:03:00.025263  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.025293  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46283
	I1030 23:03:00.025363  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.025915  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.025936  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.025965  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.026034  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.026050  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.026355  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.026372  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.026476  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.026491  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.026492  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39541
	I1030 23:03:00.026554  216380 main.go:141] libmachine: (addons-780757) Calling .GetState
	I1030 23:03:00.027442  216380 main.go:141] libmachine: (addons-780757) Calling .GetState
	I1030 23:03:00.027522  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.027549  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.028088  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.028117  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.028178  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.028196  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.028265  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35369
	I1030 23:03:00.028656  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.028957  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.029221  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.029252  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.029480  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.029502  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.030615  216380 addons.go:231] Setting addon default-storageclass=true in "addons-780757"
	I1030 23:03:00.030661  216380 host.go:66] Checking if "addons-780757" exists ...
	I1030 23:03:00.030779  216380 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-780757"
	I1030 23:03:00.030816  216380 host.go:66] Checking if "addons-780757" exists ...
	I1030 23:03:00.031050  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.031082  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.031084  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.031105  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.031554  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.032082  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.032104  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41323
	I1030 23:03:00.032108  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.032454  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.032857  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.032879  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.033187  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.036694  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.036728  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.036734  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.036762  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.037034  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.037071  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.049504  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45533
	I1030 23:03:00.050088  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.050637  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.050655  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.051045  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.051604  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.051645  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.055130  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35497
	I1030 23:03:00.055659  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.056244  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.056263  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.056657  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.056860  216380 main.go:141] libmachine: (addons-780757) Calling .GetState
	I1030 23:03:00.059298  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:03:00.061909  216380 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1030 23:03:00.063341  216380 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1030 23:03:00.061689  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37425
	I1030 23:03:00.063368  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1030 23:03:00.063399  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:03:00.062451  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36323
	I1030 23:03:00.063659  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37711
	I1030 23:03:00.063801  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.064310  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.064403  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.064415  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.064434  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.064852  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.065028  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.065050  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.065514  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.065559  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.066180  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.066378  216380 main.go:141] libmachine: (addons-780757) Calling .GetState
	I1030 23:03:00.067084  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.067775  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:03:00.067822  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:03:00.067848  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.067980  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:03:00.068184  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:03:00.068444  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.068468  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.068536  216380 host.go:66] Checking if "addons-780757" exists ...
	I1030 23:03:00.068559  216380 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa Username:docker}
	I1030 23:03:00.068919  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.068970  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.069135  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.069734  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.069780  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.071511  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37249
	I1030 23:03:00.072022  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.072535  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.072552  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.072901  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.073104  216380 main.go:141] libmachine: (addons-780757) Calling .GetState
	I1030 23:03:00.073895  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35887
	I1030 23:03:00.074286  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.074748  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.074760  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.075187  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.076068  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.076111  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.076324  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:03:00.078150  216380 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1030 23:03:00.079410  216380 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1030 23:03:00.080682  216380 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1030 23:03:00.080789  216380 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-780757" context rescaled to 1 replicas
	I1030 23:03:00.081940  216380 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 23:03:00.083276  216380 out.go:177] * Verifying Kubernetes components...
	I1030 23:03:00.082063  216380 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1030 23:03:00.084625  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1030 23:03:00.084654  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:03:00.084738  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44095
	I1030 23:03:00.084764  216380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 23:03:00.085176  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.085680  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.085698  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.086080  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.086627  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.086662  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.087988  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37097
	I1030 23:03:00.088525  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.088978  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:03:00.089064  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.089327  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.089469  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:03:00.089987  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.090009  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.090443  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.091051  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.091089  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.091514  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:03:00.091752  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:03:00.091926  216380 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa Username:docker}
	I1030 23:03:00.093537  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I1030 23:03:00.093993  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.094512  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.094529  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.094872  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.095051  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:03:00.098737  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41483
	I1030 23:03:00.099286  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.099788  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.099805  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.100228  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.100774  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.100813  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.101345  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45581
	I1030 23:03:00.101994  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.102627  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.102646  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.103026  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.103621  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.103677  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.103872  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44853
	I1030 23:03:00.105448  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38995
	I1030 23:03:00.105841  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.106094  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39623
	I1030 23:03:00.106437  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.106472  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.106564  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.106624  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.106841  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.107080  216380 main.go:141] libmachine: (addons-780757) Calling .GetState
	I1030 23:03:00.107302  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.107317  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.107439  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.107452  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.107529  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34293
	I1030 23:03:00.107686  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.107849  216380 main.go:141] libmachine: (addons-780757) Calling .GetState
	I1030 23:03:00.107909  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.108462  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.108689  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35779
	I1030 23:03:00.108831  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:00.108870  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:00.109046  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.109069  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.109192  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.109456  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.109683  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.109709  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.109776  216380 main.go:141] libmachine: (addons-780757) Calling .GetState
	I1030 23:03:00.109841  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:03:00.111532  216380 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.2
	I1030 23:03:00.112233  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:03:00.112237  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32831
	I1030 23:03:00.112955  216380 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1030 23:03:00.112968  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1030 23:03:00.112985  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:03:00.112771  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.113233  216380 main.go:141] libmachine: (addons-780757) Calling .GetState
	I1030 23:03:00.113240  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:03:00.114640  216380 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 23:03:00.114175  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.116131  216380 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1030 23:03:00.117380  216380 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1030 23:03:00.116137  216380 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 23:03:00.116205  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:03:00.116551  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.117407  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1030 23:03:00.117431  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:03:00.117434  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:03:00.117462  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.116670  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.117482  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.117506  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 23:03:00.117523  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:03:00.116800  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40875
	I1030 23:03:00.117246  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:03:00.118252  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:03:00.119697  216380 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1030 23:03:00.118508  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:03:00.118607  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.119191  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.121413  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.122879  216380 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1030 23:03:00.121745  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.122922  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:03:00.122953  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.121783  216380 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa Username:docker}
	I1030 23:03:00.123177  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42713
	I1030 23:03:00.122130  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.123503  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.123517  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.122292  216380 main.go:141] libmachine: (addons-780757) Calling .GetState
	I1030 23:03:00.122451  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:03:00.123559  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.122624  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:03:00.122692  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:03:00.123754  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:03:00.123776  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:03:00.124509  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.124735  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.125282  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.125121  216380 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1030 23:03:00.125317  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:03:00.125410  216380 main.go:141] libmachine: (addons-780757) Calling .GetState
	I1030 23:03:00.125437  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:03:00.125477  216380 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa Username:docker}
	I1030 23:03:00.125539  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:03:00.125677  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.127906  216380 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1030 23:03:00.127977  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:03:00.129260  216380 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1030 23:03:00.128175  216380 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa Username:docker}
	I1030 23:03:00.128368  216380 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 23:03:00.128434  216380 main.go:141] libmachine: (addons-780757) Calling .GetState
	I1030 23:03:00.130403  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I1030 23:03:00.133170  216380 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1030 23:03:00.131141  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 23:03:00.132084  216380 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1030 23:03:00.133056  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.133444  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:03:00.134395  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:03:00.135635  216380 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1030 23:03:00.135653  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1030 23:03:00.135667  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:03:00.136835  216380 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1030 23:03:00.135243  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41161
	I1030 23:03:00.135763  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.137677  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.138258  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:03:00.138542  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.139041  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.139315  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.139320  216380 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1030 23:03:00.139362  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:03:00.139673  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:03:00.140583  216380 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1030 23:03:00.141853  216380 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1030 23:03:00.141869  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1030 23:03:00.143067  216380 out.go:177]   - Using image docker.io/registry:2.8.3
	I1030 23:03:00.144234  216380 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1030 23:03:00.144248  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1030 23:03:00.144261  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:03:00.143091  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46819
	I1030 23:03:00.140729  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:03:00.144381  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.141233  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.141246  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45897
	I1030 23:03:00.141256  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:03:00.141274  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:03:00.141321  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.144509  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.141886  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:03:00.140704  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.144699  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:03:00.144803  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:03:00.144804  216380 main.go:141] libmachine: (addons-780757) Calling .GetState
	I1030 23:03:00.144956  216380 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa Username:docker}
	I1030 23:03:00.144996  216380 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa Username:docker}
	I1030 23:03:00.145158  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.145661  216380 main.go:141] libmachine: (addons-780757) Calling .GetState
	I1030 23:03:00.146810  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.147029  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:03:00.148662  216380 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1030 23:03:00.149975  216380 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1030 23:03:00.148748  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.150004  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:03:00.150022  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:03:00.150055  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.148371  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.148407  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:03:00.150076  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.147509  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:00.149152  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:03:00.149455  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.150193  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:03:00.150228  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:03:00.149988  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1030 23:03:00.150250  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.150257  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:03:00.151501  216380 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1030 23:03:00.150322  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:03:00.150321  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:03:00.150703  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.152010  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:00.152838  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:00.154045  216380 out.go:177]   - Using image docker.io/busybox:stable
	I1030 23:03:00.155418  216380 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1030 23:03:00.155431  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1030 23:03:00.153058  216380 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa Username:docker}
	I1030 23:03:00.153082  216380 main.go:141] libmachine: (addons-780757) Calling .GetState
	I1030 23:03:00.153087  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.153247  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:03:00.155662  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:03:00.155685  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.153834  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:03:00.154132  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:00.155444  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:03:00.155855  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:03:00.156018  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:03:00.156091  216380 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa Username:docker}
	I1030 23:03:00.156134  216380 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa Username:docker}
	I1030 23:03:00.156628  216380 main.go:141] libmachine: (addons-780757) Calling .GetState
	I1030 23:03:00.158103  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:03:00.159791  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:03:00.159808  216380 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1030 23:03:00.161085  216380 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1030 23:03:00.161104  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1030 23:03:00.161121  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:03:00.159052  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:03:00.159255  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.161207  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:03:00.161232  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.159980  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:03:00.162667  216380 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1030 23:03:00.161555  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:03:00.163947  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.163979  216380 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1030 23:03:00.163994  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1030 23:03:00.164011  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:03:00.164054  216380 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa Username:docker}
	I1030 23:03:00.164674  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:03:00.164699  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:03:00.164737  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.164918  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:03:00.165149  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:03:00.165425  216380 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa Username:docker}
	I1030 23:03:00.167283  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.167599  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:03:00.167632  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:00.167845  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:03:00.168029  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:03:00.168209  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:03:00.168383  216380 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa Username:docker}
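[Editor's note] The burst of GetSSHHostname/GetSSHPort/GetSSHKeyPath/GetSSHUsername calls above resolves the VM's address from its libvirt DHCP lease and ends in several "new ssh client" lines, one per addon being provisioned concurrently. Purely as an illustrative sketch (not minikube's sshutil implementation), establishing such a client with golang.org/x/crypto/ssh could look like this; the IP, port, key path and username are the values printed in the log.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// newSSHClient dials the VM using the host, port, key path and user resolved
// above. Hypothetical helper for illustration only; host-key verification is
// skipped because the target is a throwaway test VM.
func newSSHClient(ip string, port int, keyPath, user string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, fmt.Errorf("reading key %s: %w", keyPath, err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, fmt.Errorf("parsing key: %w", err)
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
}

func main() {
	client, err := newSSHClient("192.168.39.172", 22,
		"/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa", "docker")
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("ssh client established")
}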
	I1030 23:03:00.338524  216380 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1030 23:03:00.338550  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1030 23:03:00.388649  216380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1030 23:03:00.403582  216380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1030 23:03:00.427197  216380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1030 23:03:00.427918  216380 node_ready.go:35] waiting up to 6m0s for node "addons-780757" to be "Ready" ...
	I1030 23:03:00.431411  216380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1030 23:03:00.445850  216380 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1030 23:03:00.445876  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1030 23:03:00.449525  216380 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1030 23:03:00.449544  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1030 23:03:00.474265  216380 node_ready.go:49] node "addons-780757" has status "Ready":"True"
	I1030 23:03:00.474294  216380 node_ready.go:38] duration metric: took 46.339938ms waiting for node "addons-780757" to be "Ready" ...
	I1030 23:03:00.474307  216380 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 23:03:00.493437  216380 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-780757" in "kube-system" namespace to be "Ready" ...
	I1030 23:03:00.507867  216380 pod_ready.go:92] pod "etcd-addons-780757" in "kube-system" namespace has status "Ready":"True"
	I1030 23:03:00.507903  216380 pod_ready.go:81] duration metric: took 14.435556ms waiting for pod "etcd-addons-780757" in "kube-system" namespace to be "Ready" ...
	I1030 23:03:00.507918  216380 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-780757" in "kube-system" namespace to be "Ready" ...
	I1030 23:03:00.527144  216380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 23:03:00.545392  216380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1030 23:03:00.561388  216380 pod_ready.go:92] pod "kube-apiserver-addons-780757" in "kube-system" namespace has status "Ready":"True"
	I1030 23:03:00.561409  216380 pod_ready.go:81] duration metric: took 53.48365ms waiting for pod "kube-apiserver-addons-780757" in "kube-system" namespace to be "Ready" ...
	I1030 23:03:00.561418  216380 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-780757" in "kube-system" namespace to be "Ready" ...
	I1030 23:03:00.575179  216380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1030 23:03:00.629317  216380 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1030 23:03:00.629344  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1030 23:03:00.635938  216380 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1030 23:03:00.635962  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1030 23:03:00.656520  216380 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 23:03:00.656548  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1030 23:03:00.665468  216380 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1030 23:03:00.665492  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1030 23:03:00.666978  216380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 23:03:00.690577  216380 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1030 23:03:00.690612  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1030 23:03:00.731351  216380 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1030 23:03:00.731388  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1030 23:03:00.859297  216380 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1030 23:03:00.859325  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1030 23:03:00.894408  216380 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1030 23:03:00.894433  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1030 23:03:00.897823  216380 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1030 23:03:00.897847  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1030 23:03:00.921913  216380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1030 23:03:00.923785  216380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1030 23:03:00.926020  216380 pod_ready.go:92] pod "kube-controller-manager-addons-780757" in "kube-system" namespace has status "Ready":"True"
	I1030 23:03:00.926044  216380 pod_ready.go:81] duration metric: took 364.618292ms waiting for pod "kube-controller-manager-addons-780757" in "kube-system" namespace to be "Ready" ...
	I1030 23:03:00.926060  216380 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-780757" in "kube-system" namespace to be "Ready" ...
	I1030 23:03:00.941750  216380 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1030 23:03:00.941790  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1030 23:03:01.003969  216380 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1030 23:03:01.004003  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1030 23:03:01.051599  216380 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1030 23:03:01.051628  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1030 23:03:01.089809  216380 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1030 23:03:01.089842  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1030 23:03:01.158744  216380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1030 23:03:01.194273  216380 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1030 23:03:01.194299  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1030 23:03:01.227150  216380 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1030 23:03:01.227182  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1030 23:03:01.233506  216380 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1030 23:03:01.233533  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1030 23:03:01.284913  216380 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1030 23:03:01.284961  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1030 23:03:01.334497  216380 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1030 23:03:01.334528  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1030 23:03:01.346083  216380 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1030 23:03:01.346114  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1030 23:03:01.368973  216380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1030 23:03:01.426616  216380 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1030 23:03:01.426646  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1030 23:03:01.445537  216380 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1030 23:03:01.445564  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1030 23:03:01.478568  216380 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1030 23:03:01.478606  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1030 23:03:01.509474  216380 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1030 23:03:01.509498  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1030 23:03:01.558765  216380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1030 23:03:01.560121  216380 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1030 23:03:01.560142  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1030 23:03:01.598861  216380 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1030 23:03:01.598886  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1030 23:03:01.599192  216380 pod_ready.go:92] pod "kube-scheduler-addons-780757" in "kube-system" namespace has status "Ready":"True"
	I1030 23:03:01.599220  216380 pod_ready.go:81] duration metric: took 673.151521ms waiting for pod "kube-scheduler-addons-780757" in "kube-system" namespace to be "Ready" ...
	I1030 23:03:01.599234  216380 pod_ready.go:38] duration metric: took 1.124913538s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
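[Editor's note] The pod_ready waits above poll each system-critical pod until its Ready condition reports True, then record the elapsed duration. A minimal sketch of that kind of check with client-go follows; it assumes an already-configured clientset and mirrors the idea, not pod_ready.go itself.

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the named pod reports Ready=True or the timeout
// expires. Hypothetical helper; transient Get errors simply keep the poll going.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}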
	I1030 23:03:01.599258  216380 api_server.go:52] waiting for apiserver process to appear ...
	I1030 23:03:01.599320  216380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 23:03:01.683523  216380 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1030 23:03:01.683548  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1030 23:03:01.731682  216380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
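[Editor's note] Each "ssh_runner.go:195] Run:" line above executes a command inside the VM over one of the SSH clients created earlier; the addon manifests are applied in batches with the pinned kubectl binary and the in-VM kubeconfig. A sketch of running one such command over a golang.org/x/crypto/ssh client, assuming the newSSHClient helper sketched earlier, not minikube's ssh_runner:

package sketch

import "golang.org/x/crypto/ssh"

// runRemote runs one command over an established *ssh.Client, in the spirit
// of the "Run:" lines above; combined stdout/stderr is returned for brevity.
// Hypothetical helper for illustration.
func runRemote(client *ssh.Client, cmd string) (string, error) {
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

A call such as runRemote(client, "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml ...") would correspond to the batched apply commands in the log.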
	I1030 23:03:06.881881  216380 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1030 23:03:06.881923  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:03:06.885422  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:06.885804  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:03:06.885839  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:06.886023  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:03:06.886240  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:03:06.886386  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:03:06.886530  216380 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa Username:docker}
	I1030 23:03:07.056506  216380 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
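[Editor's note] The "scp memory --> <path> (N bytes)" lines copy in-memory contents straight to a path inside the VM rather than transferring a file from disk. One way to sketch that over the same SSH client is to stream the bytes into a remote "sudo tee"; this mechanism is an assumption made purely for illustration, and minikube's actual transfer may differ.

package sketch

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// copyMemory writes data to dst inside the VM by piping it into "sudo tee",
// mirroring the "scp memory --> ..." log lines above. Hypothetical helper.
func copyMemory(client *ssh.Client, data []byte, dst string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %s > /dev/null", dst))
}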
	I1030 23:03:07.085825  216380 addons.go:231] Setting addon gcp-auth=true in "addons-780757"
	I1030 23:03:07.085914  216380 host.go:66] Checking if "addons-780757" exists ...
	I1030 23:03:07.086389  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:07.086436  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:07.116952  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I1030 23:03:07.117444  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:07.118008  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:07.118039  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:07.118432  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:07.119050  216380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:03:07.119086  216380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:03:07.134960  216380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39763
	I1030 23:03:07.135407  216380 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:03:07.135860  216380 main.go:141] libmachine: Using API Version  1
	I1030 23:03:07.135884  216380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:03:07.136249  216380 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:03:07.136484  216380 main.go:141] libmachine: (addons-780757) Calling .GetState
	I1030 23:03:07.138297  216380 main.go:141] libmachine: (addons-780757) Calling .DriverName
	I1030 23:03:07.138567  216380 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1030 23:03:07.138600  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHHostname
	I1030 23:03:07.141440  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:07.141859  216380 main.go:141] libmachine: (addons-780757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:88:e5", ip: ""} in network mk-addons-780757: {Iface:virbr1 ExpiryTime:2023-10-31 00:02:20 +0000 UTC Type:0 Mac:52:54:00:29:88:e5 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:addons-780757 Clientid:01:52:54:00:29:88:e5}
	I1030 23:03:07.141895  216380 main.go:141] libmachine: (addons-780757) DBG | domain addons-780757 has defined IP address 192.168.39.172 and MAC address 52:54:00:29:88:e5 in network mk-addons-780757
	I1030 23:03:07.142057  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHPort
	I1030 23:03:07.142241  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHKeyPath
	I1030 23:03:07.142411  216380 main.go:141] libmachine: (addons-780757) Calling .GetSSHUsername
	I1030 23:03:07.142594  216380 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/addons-780757/id_rsa Username:docker}
	I1030 23:03:08.800539  216380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.411855476s)
	I1030 23:03:08.800599  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.800613  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.800629  216380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.397015672s)
	I1030 23:03:08.800665  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.800681  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.800717  216380 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.373493175s)
	I1030 23:03:08.800745  216380 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
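[Editor's note] The completed command above rewrote the coredns ConfigMap by inserting a "hosts" stanza, mapping host.minikube.internal to the host gateway 192.168.39.1, immediately before the Corefile's "forward . /etc/resolv.conf" plugin, then replaced the ConfigMap. A plain string-transformation sketch of that edit is shown below; the real flow shells out to kubectl and sed inside the VM, as logged at 23:03:00.427.

package sketch

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS "hosts" block for host.minikube.internal
// just before the "forward . /etc/resolv.conf" line of a Corefile. Sketch
// only; illustrative equivalent of the kubectl get | sed | kubectl replace
// pipeline in the log.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }",
		hostIP)
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}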
	I1030 23:03:08.800813  216380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.369370113s)
	I1030 23:03:08.800851  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.800868  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.800898  216380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.273725843s)
	I1030 23:03:08.800962  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:08.800979  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.800996  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.801014  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.801018  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.801032  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.801120  216380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.134122776s)
	I1030 23:03:08.801125  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.801144  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.801155  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.801281  216380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.879329536s)
	I1030 23:03:08.801307  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.801317  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.801331  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:08.801031  216380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.255611764s)
	I1030 23:03:08.801372  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.801381  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.801391  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.801400  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.801399  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.801413  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.801440  216380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.877616467s)
	I1030 23:03:08.801460  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.801469  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.801521  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.801532  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.801541  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.801550  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.801558  216380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.642786426s)
	I1030 23:03:08.801578  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.801587  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.801602  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:08.801681  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:08.801726  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.801739  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.801738  216380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.432724637s)
	W1030 23:03:08.801779  216380 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1030 23:03:08.801818  216380 retry.go:31] will retry after 166.783577ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
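[Editor's note] The warning above is the usual ordering problem when CRDs and custom resources land in a single apply batch: the VolumeSnapshotClass cannot be mapped because the snapshot CRDs are not yet established, so the whole batch is retried a moment later (the reapply at 23:03:08.969 below adds --force). A generic sketch of that retry-on-failure pattern, not minikube's retry.go:

package sketch

import "time"

// applyWithRetry re-runs apply until it succeeds or attempts are exhausted,
// doubling the wait between tries. Sketch of the pattern behind the
// "will retry after ..." log lines; the real backoff and jitter differ.
func applyWithRetry(apply func() error, attempts int, initial time.Duration) error {
	backoff := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return err
}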
	I1030 23:03:08.801905  216380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.243097423s)
	I1030 23:03:08.801927  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.801937  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.802020  216380 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.202680926s)
	I1030 23:03:08.802050  216380 api_server.go:72] duration metric: took 8.720060356s to wait for apiserver process to appear ...
	I1030 23:03:08.802058  216380 api_server.go:88] waiting for apiserver healthz status ...
	I1030 23:03:08.802074  216380 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1030 23:03:08.802555  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.802568  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.801093  216380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.225888023s)
	I1030 23:03:08.802794  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.802807  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.802874  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:08.802891  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:08.802904  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:08.802921  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.802929  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.802940  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.802944  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.802948  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.802954  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.803884  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:08.803911  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:08.803942  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.803962  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.803972  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.803982  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.804048  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:08.804075  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.804083  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.804092  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.804109  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.804149  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:08.804176  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.804184  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.804193  216380 addons.go:467] Verifying addon ingress=true in "addons-780757"
	I1030 23:03:08.805850  216380 out.go:177] * Verifying ingress addon...
	I1030 23:03:08.804845  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.807437  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.807455  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.807470  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.804885  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:08.804913  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.807572  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.807582  216380 addons.go:467] Verifying addon registry=true in "addons-780757"
	I1030 23:03:08.808856  216380 out.go:177] * Verifying registry addon...
	I1030 23:03:08.808034  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:08.804967  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.810635  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.804995  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.810662  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.810680  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.810694  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.805017  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.810746  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.810763  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.805032  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:08.810777  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.805048  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.810796  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.810812  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.810821  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.805063  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:08.805150  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.810930  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.810941  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.810949  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.805183  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:08.808071  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.811032  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.811045  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:08.811088  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.811098  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.811168  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:08.811201  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.804931  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:08.811220  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.811239  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.808074  216380 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1030 23:03:08.811252  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.811265  216380 addons.go:467] Verifying addon metrics-server=true in "addons-780757"
	I1030 23:03:08.811499  216380 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1030 23:03:08.811605  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.811621  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.811818  216380 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I1030 23:03:08.817438  216380 api_server.go:141] control plane version: v1.28.3
	I1030 23:03:08.817464  216380 api_server.go:131] duration metric: took 15.397901ms to wait for apiserver health ...
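[Editor's note] The health wait above issues GET https://192.168.39.172:8443/healthz and accepts a 200 "ok" body before moving on to the kube-system pod checks. A minimal sketch of such a probe follows; TLS verification is disabled here purely for illustration, whereas a properly configured client would present the cluster CA and client certificates.

package sketch

import (
	"crypto/tls"
	"io"
	"net/http"
	"strings"
	"time"
)

// apiserverHealthy probes /healthz and reports whether it answered 200 "ok".
// Sketch only; InsecureSkipVerify is for the illustration, not production use.
func apiserverHealthy(addr string) bool {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://" + addr + "/healthz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok"
}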
	I1030 23:03:08.817475  216380 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 23:03:08.841677  216380 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1030 23:03:08.841702  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:08.841831  216380 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1030 23:03:08.841849  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:08.858281  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:08.867792  216380 system_pods.go:59] 15 kube-system pods found
	I1030 23:03:08.867820  216380 system_pods.go:61] "coredns-5dd5756b68-vnckz" [1dd885ee-b288-4507-900f-ebae0145f76c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 23:03:08.867827  216380 system_pods.go:61] "etcd-addons-780757" [32eb9ec4-3cfd-4ee2-a6e6-9dcb263804b3] Running
	I1030 23:03:08.867832  216380 system_pods.go:61] "kube-apiserver-addons-780757" [bcf74483-d701-45e5-88f5-b185f5cc013d] Running
	I1030 23:03:08.867836  216380 system_pods.go:61] "kube-controller-manager-addons-780757" [88856819-625c-49f3-96f3-7029b3c3d18b] Running
	I1030 23:03:08.867842  216380 system_pods.go:61] "kube-ingress-dns-minikube" [7d8b030a-8d89-47c3-87b3-fa6b3676c8ce] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1030 23:03:08.867851  216380 system_pods.go:61] "kube-proxy-2s8wh" [b9f4faf7-a1f8-4050-ac2e-deb5767caa4d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1030 23:03:08.867857  216380 system_pods.go:61] "kube-scheduler-addons-780757" [268eccf9-0286-4647-9d83-cd8ed61af64f] Running
	I1030 23:03:08.867865  216380 system_pods.go:61] "metrics-server-7c66d45ddc-4499m" [22c8ca31-7056-4163-babb-971556cba3e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 23:03:08.867883  216380 system_pods.go:61] "nvidia-device-plugin-daemonset-w9bkq" [1450dcb6-9793-46c2-9756-3c6d18987e5c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1030 23:03:08.867892  216380 system_pods.go:61] "registry-jn9tz" [67828364-5870-444a-96d1-9b020f6fba34] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1030 23:03:08.867898  216380 system_pods.go:61] "registry-proxy-hxqz8" [299618eb-6ec6-4599-9c0b-63b3612bdad0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1030 23:03:08.867907  216380 system_pods.go:61] "snapshot-controller-58dbcc7b99-699z7" [94d02eb3-de06-4a38-838a-901acddc9ec1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 23:03:08.867915  216380 system_pods.go:61] "snapshot-controller-58dbcc7b99-q8jtn" [8516a697-a645-4cc0-801b-784b5c9301ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 23:03:08.867923  216380 system_pods.go:61] "storage-provisioner" [a24f119a-67f4-40e1-9c52-12f734c702b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1030 23:03:08.867929  216380 system_pods.go:61] "tiller-deploy-7b677967b9-kbjrj" [00fd1262-2787-4601-9bf1-5be82236edba] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1030 23:03:08.867944  216380 system_pods.go:74] duration metric: took 50.458198ms to wait for pod list to return data ...
	I1030 23:03:08.867956  216380 default_sa.go:34] waiting for default service account to be created ...
	I1030 23:03:08.871841  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.871866  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.872158  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.872178  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.872175  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	W1030 23:03:08.872291  216380 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1030 23:03:08.880441  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:08.882874  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:08.882898  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:08.883197  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:08.883214  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:08.884058  216380 default_sa.go:45] found service account: "default"
	I1030 23:03:08.884078  216380 default_sa.go:55] duration metric: took 16.115334ms for default service account to be created ...
	I1030 23:03:08.884087  216380 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 23:03:08.893815  216380 system_pods.go:86] 15 kube-system pods found
	I1030 23:03:08.893842  216380 system_pods.go:89] "coredns-5dd5756b68-vnckz" [1dd885ee-b288-4507-900f-ebae0145f76c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 23:03:08.893851  216380 system_pods.go:89] "etcd-addons-780757" [32eb9ec4-3cfd-4ee2-a6e6-9dcb263804b3] Running
	I1030 23:03:08.893856  216380 system_pods.go:89] "kube-apiserver-addons-780757" [bcf74483-d701-45e5-88f5-b185f5cc013d] Running
	I1030 23:03:08.893863  216380 system_pods.go:89] "kube-controller-manager-addons-780757" [88856819-625c-49f3-96f3-7029b3c3d18b] Running
	I1030 23:03:08.893872  216380 system_pods.go:89] "kube-ingress-dns-minikube" [7d8b030a-8d89-47c3-87b3-fa6b3676c8ce] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1030 23:03:08.893886  216380 system_pods.go:89] "kube-proxy-2s8wh" [b9f4faf7-a1f8-4050-ac2e-deb5767caa4d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1030 23:03:08.893894  216380 system_pods.go:89] "kube-scheduler-addons-780757" [268eccf9-0286-4647-9d83-cd8ed61af64f] Running
	I1030 23:03:08.893905  216380 system_pods.go:89] "metrics-server-7c66d45ddc-4499m" [22c8ca31-7056-4163-babb-971556cba3e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 23:03:08.893915  216380 system_pods.go:89] "nvidia-device-plugin-daemonset-w9bkq" [1450dcb6-9793-46c2-9756-3c6d18987e5c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1030 23:03:08.893923  216380 system_pods.go:89] "registry-jn9tz" [67828364-5870-444a-96d1-9b020f6fba34] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1030 23:03:08.893929  216380 system_pods.go:89] "registry-proxy-hxqz8" [299618eb-6ec6-4599-9c0b-63b3612bdad0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1030 23:03:08.893938  216380 system_pods.go:89] "snapshot-controller-58dbcc7b99-699z7" [94d02eb3-de06-4a38-838a-901acddc9ec1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 23:03:08.893947  216380 system_pods.go:89] "snapshot-controller-58dbcc7b99-q8jtn" [8516a697-a645-4cc0-801b-784b5c9301ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 23:03:08.893955  216380 system_pods.go:89] "storage-provisioner" [a24f119a-67f4-40e1-9c52-12f734c702b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1030 23:03:08.893968  216380 system_pods.go:89] "tiller-deploy-7b677967b9-kbjrj" [00fd1262-2787-4601-9bf1-5be82236edba] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1030 23:03:08.893990  216380 retry.go:31] will retry after 266.887156ms: missing components: kube-proxy
	I1030 23:03:08.969580  216380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1030 23:03:09.186934  216380 system_pods.go:86] 15 kube-system pods found
	I1030 23:03:09.186970  216380 system_pods.go:89] "coredns-5dd5756b68-vnckz" [1dd885ee-b288-4507-900f-ebae0145f76c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 23:03:09.186977  216380 system_pods.go:89] "etcd-addons-780757" [32eb9ec4-3cfd-4ee2-a6e6-9dcb263804b3] Running
	I1030 23:03:09.186982  216380 system_pods.go:89] "kube-apiserver-addons-780757" [bcf74483-d701-45e5-88f5-b185f5cc013d] Running
	I1030 23:03:09.186986  216380 system_pods.go:89] "kube-controller-manager-addons-780757" [88856819-625c-49f3-96f3-7029b3c3d18b] Running
	I1030 23:03:09.186995  216380 system_pods.go:89] "kube-ingress-dns-minikube" [7d8b030a-8d89-47c3-87b3-fa6b3676c8ce] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1030 23:03:09.187001  216380 system_pods.go:89] "kube-proxy-2s8wh" [b9f4faf7-a1f8-4050-ac2e-deb5767caa4d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1030 23:03:09.187006  216380 system_pods.go:89] "kube-scheduler-addons-780757" [268eccf9-0286-4647-9d83-cd8ed61af64f] Running
	I1030 23:03:09.187011  216380 system_pods.go:89] "metrics-server-7c66d45ddc-4499m" [22c8ca31-7056-4163-babb-971556cba3e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 23:03:09.187017  216380 system_pods.go:89] "nvidia-device-plugin-daemonset-w9bkq" [1450dcb6-9793-46c2-9756-3c6d18987e5c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1030 23:03:09.187027  216380 system_pods.go:89] "registry-jn9tz" [67828364-5870-444a-96d1-9b020f6fba34] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1030 23:03:09.187034  216380 system_pods.go:89] "registry-proxy-hxqz8" [299618eb-6ec6-4599-9c0b-63b3612bdad0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1030 23:03:09.187040  216380 system_pods.go:89] "snapshot-controller-58dbcc7b99-699z7" [94d02eb3-de06-4a38-838a-901acddc9ec1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 23:03:09.187047  216380 system_pods.go:89] "snapshot-controller-58dbcc7b99-q8jtn" [8516a697-a645-4cc0-801b-784b5c9301ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 23:03:09.187053  216380 system_pods.go:89] "storage-provisioner" [a24f119a-67f4-40e1-9c52-12f734c702b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1030 23:03:09.187061  216380 system_pods.go:89] "tiller-deploy-7b677967b9-kbjrj" [00fd1262-2787-4601-9bf1-5be82236edba] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1030 23:03:09.187077  216380 retry.go:31] will retry after 294.695474ms: missing components: kube-proxy
	I1030 23:03:09.377161  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:09.400995  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:09.538797  216380 system_pods.go:86] 16 kube-system pods found
	I1030 23:03:09.538837  216380 system_pods.go:89] "coredns-5dd5756b68-vnckz" [1dd885ee-b288-4507-900f-ebae0145f76c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 23:03:09.538846  216380 system_pods.go:89] "csi-hostpath-attacher-0" [5e89bfbd-f218-4afc-80e7-ca412afa1b3f] Pending
	I1030 23:03:09.538850  216380 system_pods.go:89] "etcd-addons-780757" [32eb9ec4-3cfd-4ee2-a6e6-9dcb263804b3] Running
	I1030 23:03:09.538855  216380 system_pods.go:89] "kube-apiserver-addons-780757" [bcf74483-d701-45e5-88f5-b185f5cc013d] Running
	I1030 23:03:09.538860  216380 system_pods.go:89] "kube-controller-manager-addons-780757" [88856819-625c-49f3-96f3-7029b3c3d18b] Running
	I1030 23:03:09.538866  216380 system_pods.go:89] "kube-ingress-dns-minikube" [7d8b030a-8d89-47c3-87b3-fa6b3676c8ce] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1030 23:03:09.538875  216380 system_pods.go:89] "kube-proxy-2s8wh" [b9f4faf7-a1f8-4050-ac2e-deb5767caa4d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1030 23:03:09.538882  216380 system_pods.go:89] "kube-scheduler-addons-780757" [268eccf9-0286-4647-9d83-cd8ed61af64f] Running
	I1030 23:03:09.538891  216380 system_pods.go:89] "metrics-server-7c66d45ddc-4499m" [22c8ca31-7056-4163-babb-971556cba3e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 23:03:09.538900  216380 system_pods.go:89] "nvidia-device-plugin-daemonset-w9bkq" [1450dcb6-9793-46c2-9756-3c6d18987e5c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1030 23:03:09.538909  216380 system_pods.go:89] "registry-jn9tz" [67828364-5870-444a-96d1-9b020f6fba34] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1030 23:03:09.538915  216380 system_pods.go:89] "registry-proxy-hxqz8" [299618eb-6ec6-4599-9c0b-63b3612bdad0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1030 23:03:09.538924  216380 system_pods.go:89] "snapshot-controller-58dbcc7b99-699z7" [94d02eb3-de06-4a38-838a-901acddc9ec1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 23:03:09.538940  216380 system_pods.go:89] "snapshot-controller-58dbcc7b99-q8jtn" [8516a697-a645-4cc0-801b-784b5c9301ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 23:03:09.538958  216380 system_pods.go:89] "storage-provisioner" [a24f119a-67f4-40e1-9c52-12f734c702b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1030 23:03:09.538964  216380 system_pods.go:89] "tiller-deploy-7b677967b9-kbjrj" [00fd1262-2787-4601-9bf1-5be82236edba] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1030 23:03:09.538983  216380 retry.go:31] will retry after 430.667485ms: missing components: kube-proxy
	I1030 23:03:09.900291  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:09.906278  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:09.970585  216380 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.831986433s)
	I1030 23:03:09.970608  216380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.238875826s)
	I1030 23:03:09.970660  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:09.970680  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:09.972349  216380 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1030 23:03:09.971083  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:09.973693  216380 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1030 23:03:09.971117  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:09.972393  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:09.973733  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:09.975189  216380 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1030 23:03:09.973744  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:09.975212  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1030 23:03:09.975558  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:09.975580  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:09.975592  216380 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-780757"
	I1030 23:03:09.976890  216380 out.go:177] * Verifying csi-hostpath-driver addon...
	I1030 23:03:09.978878  216380 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1030 23:03:09.991452  216380 system_pods.go:86] 18 kube-system pods found
	I1030 23:03:09.991485  216380 system_pods.go:89] "coredns-5dd5756b68-vnckz" [1dd885ee-b288-4507-900f-ebae0145f76c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 23:03:09.991498  216380 system_pods.go:89] "csi-hostpath-attacher-0" [5e89bfbd-f218-4afc-80e7-ca412afa1b3f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1030 23:03:09.991508  216380 system_pods.go:89] "csi-hostpath-resizer-0" [a2b68e02-76fb-44be-a68d-5f5408bab6ac] Pending
	I1030 23:03:09.991517  216380 system_pods.go:89] "csi-hostpathplugin-5chqs" [d81809ea-7714-417d-b595-f4baba970354] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1030 23:03:09.991524  216380 system_pods.go:89] "etcd-addons-780757" [32eb9ec4-3cfd-4ee2-a6e6-9dcb263804b3] Running
	I1030 23:03:09.991540  216380 system_pods.go:89] "kube-apiserver-addons-780757" [bcf74483-d701-45e5-88f5-b185f5cc013d] Running
	I1030 23:03:09.991551  216380 system_pods.go:89] "kube-controller-manager-addons-780757" [88856819-625c-49f3-96f3-7029b3c3d18b] Running
	I1030 23:03:09.991562  216380 system_pods.go:89] "kube-ingress-dns-minikube" [7d8b030a-8d89-47c3-87b3-fa6b3676c8ce] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1030 23:03:09.991581  216380 system_pods.go:89] "kube-proxy-2s8wh" [b9f4faf7-a1f8-4050-ac2e-deb5767caa4d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1030 23:03:09.991588  216380 system_pods.go:89] "kube-scheduler-addons-780757" [268eccf9-0286-4647-9d83-cd8ed61af64f] Running
	I1030 23:03:09.991598  216380 system_pods.go:89] "metrics-server-7c66d45ddc-4499m" [22c8ca31-7056-4163-babb-971556cba3e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 23:03:09.991609  216380 system_pods.go:89] "nvidia-device-plugin-daemonset-w9bkq" [1450dcb6-9793-46c2-9756-3c6d18987e5c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1030 23:03:09.991624  216380 system_pods.go:89] "registry-jn9tz" [67828364-5870-444a-96d1-9b020f6fba34] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1030 23:03:09.991638  216380 system_pods.go:89] "registry-proxy-hxqz8" [299618eb-6ec6-4599-9c0b-63b3612bdad0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1030 23:03:09.991649  216380 system_pods.go:89] "snapshot-controller-58dbcc7b99-699z7" [94d02eb3-de06-4a38-838a-901acddc9ec1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 23:03:09.991663  216380 system_pods.go:89] "snapshot-controller-58dbcc7b99-q8jtn" [8516a697-a645-4cc0-801b-784b5c9301ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 23:03:09.991676  216380 system_pods.go:89] "storage-provisioner" [a24f119a-67f4-40e1-9c52-12f734c702b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1030 23:03:09.991689  216380 system_pods.go:89] "tiller-deploy-7b677967b9-kbjrj" [00fd1262-2787-4601-9bf1-5be82236edba] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1030 23:03:09.991709  216380 retry.go:31] will retry after 409.303957ms: missing components: kube-proxy
	I1030 23:03:10.001019  216380 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1030 23:03:10.001035  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1030 23:03:10.019685  216380 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1030 23:03:10.019704  216380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1030 23:03:10.023031  216380 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1030 23:03:10.023050  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:10.048014  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:10.080515  216380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1030 23:03:10.370377  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:10.406733  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:10.429956  216380 system_pods.go:86] 18 kube-system pods found
	I1030 23:03:10.429987  216380 system_pods.go:89] "coredns-5dd5756b68-vnckz" [1dd885ee-b288-4507-900f-ebae0145f76c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 23:03:10.429994  216380 system_pods.go:89] "csi-hostpath-attacher-0" [5e89bfbd-f218-4afc-80e7-ca412afa1b3f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1030 23:03:10.430001  216380 system_pods.go:89] "csi-hostpath-resizer-0" [a2b68e02-76fb-44be-a68d-5f5408bab6ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1030 23:03:10.430007  216380 system_pods.go:89] "csi-hostpathplugin-5chqs" [d81809ea-7714-417d-b595-f4baba970354] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1030 23:03:10.430012  216380 system_pods.go:89] "etcd-addons-780757" [32eb9ec4-3cfd-4ee2-a6e6-9dcb263804b3] Running
	I1030 23:03:10.430016  216380 system_pods.go:89] "kube-apiserver-addons-780757" [bcf74483-d701-45e5-88f5-b185f5cc013d] Running
	I1030 23:03:10.430020  216380 system_pods.go:89] "kube-controller-manager-addons-780757" [88856819-625c-49f3-96f3-7029b3c3d18b] Running
	I1030 23:03:10.430026  216380 system_pods.go:89] "kube-ingress-dns-minikube" [7d8b030a-8d89-47c3-87b3-fa6b3676c8ce] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1030 23:03:10.430033  216380 system_pods.go:89] "kube-proxy-2s8wh" [b9f4faf7-a1f8-4050-ac2e-deb5767caa4d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1030 23:03:10.430038  216380 system_pods.go:89] "kube-scheduler-addons-780757" [268eccf9-0286-4647-9d83-cd8ed61af64f] Running
	I1030 23:03:10.430043  216380 system_pods.go:89] "metrics-server-7c66d45ddc-4499m" [22c8ca31-7056-4163-babb-971556cba3e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 23:03:10.430050  216380 system_pods.go:89] "nvidia-device-plugin-daemonset-w9bkq" [1450dcb6-9793-46c2-9756-3c6d18987e5c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1030 23:03:10.430059  216380 system_pods.go:89] "registry-jn9tz" [67828364-5870-444a-96d1-9b020f6fba34] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1030 23:03:10.430069  216380 system_pods.go:89] "registry-proxy-hxqz8" [299618eb-6ec6-4599-9c0b-63b3612bdad0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1030 23:03:10.430078  216380 system_pods.go:89] "snapshot-controller-58dbcc7b99-699z7" [94d02eb3-de06-4a38-838a-901acddc9ec1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 23:03:10.430091  216380 system_pods.go:89] "snapshot-controller-58dbcc7b99-q8jtn" [8516a697-a645-4cc0-801b-784b5c9301ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 23:03:10.430103  216380 system_pods.go:89] "storage-provisioner" [a24f119a-67f4-40e1-9c52-12f734c702b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1030 23:03:10.430109  216380 system_pods.go:89] "tiller-deploy-7b677967b9-kbjrj" [00fd1262-2787-4601-9bf1-5be82236edba] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1030 23:03:10.430128  216380 retry.go:31] will retry after 618.172401ms: missing components: kube-proxy
	I1030 23:03:10.636013  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:10.863394  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:10.885306  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:11.101731  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:11.123641  216380 system_pods.go:86] 18 kube-system pods found
	I1030 23:03:11.123678  216380 system_pods.go:89] "coredns-5dd5756b68-vnckz" [1dd885ee-b288-4507-900f-ebae0145f76c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 23:03:11.123688  216380 system_pods.go:89] "csi-hostpath-attacher-0" [5e89bfbd-f218-4afc-80e7-ca412afa1b3f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1030 23:03:11.123695  216380 system_pods.go:89] "csi-hostpath-resizer-0" [a2b68e02-76fb-44be-a68d-5f5408bab6ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1030 23:03:11.123705  216380 system_pods.go:89] "csi-hostpathplugin-5chqs" [d81809ea-7714-417d-b595-f4baba970354] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1030 23:03:11.123710  216380 system_pods.go:89] "etcd-addons-780757" [32eb9ec4-3cfd-4ee2-a6e6-9dcb263804b3] Running
	I1030 23:03:11.123715  216380 system_pods.go:89] "kube-apiserver-addons-780757" [bcf74483-d701-45e5-88f5-b185f5cc013d] Running
	I1030 23:03:11.123719  216380 system_pods.go:89] "kube-controller-manager-addons-780757" [88856819-625c-49f3-96f3-7029b3c3d18b] Running
	I1030 23:03:11.123729  216380 system_pods.go:89] "kube-ingress-dns-minikube" [7d8b030a-8d89-47c3-87b3-fa6b3676c8ce] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1030 23:03:11.123735  216380 system_pods.go:89] "kube-proxy-2s8wh" [b9f4faf7-a1f8-4050-ac2e-deb5767caa4d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1030 23:03:11.123742  216380 system_pods.go:89] "kube-scheduler-addons-780757" [268eccf9-0286-4647-9d83-cd8ed61af64f] Running
	I1030 23:03:11.123748  216380 system_pods.go:89] "metrics-server-7c66d45ddc-4499m" [22c8ca31-7056-4163-babb-971556cba3e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 23:03:11.123759  216380 system_pods.go:89] "nvidia-device-plugin-daemonset-w9bkq" [1450dcb6-9793-46c2-9756-3c6d18987e5c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1030 23:03:11.123765  216380 system_pods.go:89] "registry-jn9tz" [67828364-5870-444a-96d1-9b020f6fba34] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1030 23:03:11.123770  216380 system_pods.go:89] "registry-proxy-hxqz8" [299618eb-6ec6-4599-9c0b-63b3612bdad0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1030 23:03:11.123780  216380 system_pods.go:89] "snapshot-controller-58dbcc7b99-699z7" [94d02eb3-de06-4a38-838a-901acddc9ec1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 23:03:11.123787  216380 system_pods.go:89] "snapshot-controller-58dbcc7b99-q8jtn" [8516a697-a645-4cc0-801b-784b5c9301ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 23:03:11.123795  216380 system_pods.go:89] "storage-provisioner" [a24f119a-67f4-40e1-9c52-12f734c702b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1030 23:03:11.123800  216380 system_pods.go:89] "tiller-deploy-7b677967b9-kbjrj" [00fd1262-2787-4601-9bf1-5be82236edba] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1030 23:03:11.123818  216380 retry.go:31] will retry after 640.030323ms: missing components: kube-proxy
	I1030 23:03:11.381708  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:11.393965  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:11.554271  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:11.784501  216380 system_pods.go:86] 18 kube-system pods found
	I1030 23:03:11.784538  216380 system_pods.go:89] "coredns-5dd5756b68-vnckz" [1dd885ee-b288-4507-900f-ebae0145f76c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 23:03:11.784547  216380 system_pods.go:89] "csi-hostpath-attacher-0" [5e89bfbd-f218-4afc-80e7-ca412afa1b3f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1030 23:03:11.784554  216380 system_pods.go:89] "csi-hostpath-resizer-0" [a2b68e02-76fb-44be-a68d-5f5408bab6ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1030 23:03:11.784560  216380 system_pods.go:89] "csi-hostpathplugin-5chqs" [d81809ea-7714-417d-b595-f4baba970354] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1030 23:03:11.784565  216380 system_pods.go:89] "etcd-addons-780757" [32eb9ec4-3cfd-4ee2-a6e6-9dcb263804b3] Running
	I1030 23:03:11.784569  216380 system_pods.go:89] "kube-apiserver-addons-780757" [bcf74483-d701-45e5-88f5-b185f5cc013d] Running
	I1030 23:03:11.784573  216380 system_pods.go:89] "kube-controller-manager-addons-780757" [88856819-625c-49f3-96f3-7029b3c3d18b] Running
	I1030 23:03:11.784580  216380 system_pods.go:89] "kube-ingress-dns-minikube" [7d8b030a-8d89-47c3-87b3-fa6b3676c8ce] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1030 23:03:11.784584  216380 system_pods.go:89] "kube-proxy-2s8wh" [b9f4faf7-a1f8-4050-ac2e-deb5767caa4d] Running
	I1030 23:03:11.784588  216380 system_pods.go:89] "kube-scheduler-addons-780757" [268eccf9-0286-4647-9d83-cd8ed61af64f] Running
	I1030 23:03:11.784593  216380 system_pods.go:89] "metrics-server-7c66d45ddc-4499m" [22c8ca31-7056-4163-babb-971556cba3e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1030 23:03:11.784600  216380 system_pods.go:89] "nvidia-device-plugin-daemonset-w9bkq" [1450dcb6-9793-46c2-9756-3c6d18987e5c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1030 23:03:11.784606  216380 system_pods.go:89] "registry-jn9tz" [67828364-5870-444a-96d1-9b020f6fba34] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1030 23:03:11.784614  216380 system_pods.go:89] "registry-proxy-hxqz8" [299618eb-6ec6-4599-9c0b-63b3612bdad0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1030 23:03:11.784621  216380 system_pods.go:89] "snapshot-controller-58dbcc7b99-699z7" [94d02eb3-de06-4a38-838a-901acddc9ec1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 23:03:11.784631  216380 system_pods.go:89] "snapshot-controller-58dbcc7b99-q8jtn" [8516a697-a645-4cc0-801b-784b5c9301ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1030 23:03:11.784637  216380 system_pods.go:89] "storage-provisioner" [a24f119a-67f4-40e1-9c52-12f734c702b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1030 23:03:11.784644  216380 system_pods.go:89] "tiller-deploy-7b677967b9-kbjrj" [00fd1262-2787-4601-9bf1-5be82236edba] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1030 23:03:11.784652  216380 system_pods.go:126] duration metric: took 2.900559071s to wait for k8s-apps to be running ...
	I1030 23:03:11.784668  216380 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 23:03:11.784715  216380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 23:03:11.907507  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:11.907905  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:12.056175  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:12.082436  216380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.112800489s)
	I1030 23:03:12.082500  216380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.001954721s)
	I1030 23:03:12.082523  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:12.082535  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:12.082501  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:12.082536  216380 system_svc.go:56] duration metric: took 297.863282ms WaitForService to wait for kubelet.
	I1030 23:03:12.082608  216380 kubeadm.go:581] duration metric: took 12.000624423s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1030 23:03:12.082653  216380 node_conditions.go:102] verifying NodePressure condition ...
	I1030 23:03:12.082586  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:12.082853  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:12.082870  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:12.082882  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:12.082882  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:12.082893  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:12.083039  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:12.083047  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:12.083059  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:12.083070  216380 main.go:141] libmachine: Making call to close driver server
	I1030 23:03:12.083082  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:12.083090  216380 main.go:141] libmachine: (addons-780757) Calling .Close
	I1030 23:03:12.083113  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:12.083090  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:12.083270  216380 main.go:141] libmachine: (addons-780757) DBG | Closing plugin on server side
	I1030 23:03:12.083298  216380 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:03:12.083314  216380 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:03:12.084728  216380 addons.go:467] Verifying addon gcp-auth=true in "addons-780757"
	I1030 23:03:12.086998  216380 out.go:177] * Verifying gcp-auth addon...
	I1030 23:03:12.088830  216380 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1030 23:03:12.098062  216380 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1030 23:03:12.098085  216380 node_conditions.go:123] node cpu capacity is 2
	I1030 23:03:12.098097  216380 node_conditions.go:105] duration metric: took 15.43877ms to run NodePressure ...
	I1030 23:03:12.098108  216380 start.go:228] waiting for startup goroutines ...
	I1030 23:03:12.103987  216380 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1030 23:03:12.104001  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:12.114332  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:12.365487  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:12.386277  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:12.567657  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:12.618862  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:12.864018  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:12.888262  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:13.054777  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:13.118739  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:13.362679  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:13.385969  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:13.555434  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:13.618327  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:13.865942  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:13.886655  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:14.057074  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:14.120623  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:14.364624  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:14.385864  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:14.554455  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:14.618061  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:14.863949  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:14.888425  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:15.055372  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:15.118111  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:15.363801  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:15.385207  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:15.554760  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:15.618368  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:15.863106  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:15.887594  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:16.054172  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:16.118350  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:16.362619  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:16.386207  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:16.555251  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:16.620060  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:16.868741  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:16.888229  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:17.054875  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:17.119416  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:17.370023  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:17.415726  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:17.573103  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:17.619002  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:17.894618  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:17.894726  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:18.061350  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:18.118681  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:18.366142  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:18.400703  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:18.556481  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:18.633057  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:18.863534  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:18.885422  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:19.074785  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:19.129313  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:19.376370  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:19.394817  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:19.556663  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:19.619110  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:19.863531  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:19.886446  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:20.057918  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:20.119246  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:20.367995  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:20.385589  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:20.557814  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:20.620964  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:20.870567  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:20.888640  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:21.055653  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:21.118295  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:21.366819  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:21.393042  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:21.554226  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:21.619864  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:21.863925  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:21.886122  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:22.061299  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:22.119105  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:22.363564  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:22.385960  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:22.554605  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:22.618688  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:22.862322  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:22.885877  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:23.055546  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:23.118793  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:23.363859  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:23.386762  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:23.559475  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:23.618672  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:23.863315  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:23.885567  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:24.054150  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:24.119533  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:24.363008  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:24.386010  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:24.554390  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:24.618568  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:24.862185  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:24.884698  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:25.054530  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:25.120861  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:25.363360  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:25.385364  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:25.553703  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:25.618876  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:25.869334  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:25.885980  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:26.054898  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:26.119523  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:26.363581  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:26.386553  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:26.555021  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:26.619009  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:26.863310  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:26.887026  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:27.066349  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:27.128421  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:27.364115  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:27.386421  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:27.555927  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:27.620300  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:27.863675  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:27.889342  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:28.056282  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:28.118934  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:28.364390  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:28.388452  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:28.555025  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:28.618724  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:28.863855  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:28.890319  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:29.058138  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:29.118534  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:29.365514  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:29.561482  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:29.582601  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:29.618581  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:29.863983  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:29.897628  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:30.067678  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:30.122152  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:30.364666  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:30.385752  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:30.557827  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:30.619485  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:30.863846  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:30.887521  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:31.054831  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:31.118616  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:31.366993  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:31.386876  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:31.555094  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:31.619480  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:31.865220  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:31.885291  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:32.055134  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:32.119342  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:32.368305  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:32.385653  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:32.560620  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:32.618358  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:32.864053  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:32.885771  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:33.054759  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:33.118767  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:33.363387  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:33.385959  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:33.554959  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:33.618949  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:33.864313  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:33.885599  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:34.059204  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:34.121082  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:34.363563  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:34.386750  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:34.554751  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:34.619020  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:34.863643  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:34.891358  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:35.406313  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:35.408515  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:35.408543  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:35.408761  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:35.553567  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:35.619201  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:35.863581  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:35.886405  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:36.053588  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:36.119298  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:36.362791  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:36.386015  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:36.554178  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:36.618855  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:36.864068  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:36.898442  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:37.056039  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:37.119959  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:37.366927  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:37.386659  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:37.639672  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:37.640652  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:37.863572  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:37.886589  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:38.060271  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:38.118516  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:38.363226  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:38.385351  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:38.554686  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:38.619531  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:38.862989  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:38.887427  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:39.059481  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:39.118605  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:39.363664  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:39.387882  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:39.557553  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:39.636381  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:39.868576  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:39.900365  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:40.055453  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:40.118754  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:40.363541  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:40.392845  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:40.558207  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:40.622619  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:40.863075  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:40.886510  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:41.062502  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:41.119624  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:41.363905  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:41.387636  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:41.554985  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:41.619426  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:41.864430  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:41.885911  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:42.054542  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:42.119483  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:42.364782  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:42.388762  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:42.554715  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:42.618321  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:42.864280  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:42.885350  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:43.055678  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:43.119939  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:43.368785  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:43.390967  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:43.554519  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:43.618556  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:43.866234  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:43.885421  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:44.054601  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:44.119711  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:44.364608  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:44.386904  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:44.554995  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:44.619862  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:44.865518  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:44.885198  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:45.054979  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:45.119321  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:45.371783  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:45.391775  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:45.556492  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:45.617859  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:45.869032  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:45.890324  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:46.054889  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:46.118708  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:46.365174  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:46.387404  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:46.571757  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:46.621205  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:46.869975  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:46.885158  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:47.054148  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:47.118352  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:47.362914  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:47.397984  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:47.564992  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:47.620103  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:47.884271  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:47.890799  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:48.079453  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:48.120678  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:48.363955  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:48.385781  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:48.560022  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:48.618945  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:48.867861  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:48.886867  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:49.056752  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:49.118579  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:49.363020  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:49.387251  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:49.554536  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:49.618858  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:49.865760  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:49.885488  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:50.054304  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:50.119070  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:50.364259  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:50.385006  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:50.554439  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:50.618309  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:50.866162  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:50.893307  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:51.054916  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:51.119112  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:51.363514  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:51.385858  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:51.555049  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:51.618664  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:51.863595  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:51.885154  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:52.054921  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:52.118711  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:52.363489  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:52.385279  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:52.555015  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:52.618440  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:52.862697  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:52.885462  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:53.054747  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:53.119523  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:53.362641  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:53.385487  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:53.554831  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:53.619601  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:53.863708  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:53.885627  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:54.367913  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:54.370030  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:54.370357  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:54.391364  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:54.553966  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:54.619129  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:54.863421  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:54.886121  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:55.054952  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:55.123109  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:55.363556  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:55.385433  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:55.553601  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:55.618668  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:55.863210  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:55.887304  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:56.055617  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:56.118634  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:56.363961  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:56.543228  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:56.565223  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:56.618988  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:56.863732  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:56.886825  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:57.054345  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:57.119268  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:57.363523  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:57.388273  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:57.555976  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:57.622426  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:57.863758  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:57.886307  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:58.055454  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:58.118430  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:58.364603  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:58.386381  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:58.557306  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:58.618226  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:58.863998  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:58.886313  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:59.055755  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:59.118756  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:59.379721  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:59.385431  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:03:59.555304  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:03:59.619517  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:03:59.870794  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:03:59.887370  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:04:00.055998  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:00.122605  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:00.372195  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:00.390492  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:04:00.553937  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:00.620233  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:00.863624  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:00.886022  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:04:01.056224  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:01.119505  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:01.364811  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:01.395348  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:04:01.557495  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:01.618183  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:01.863695  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:01.886833  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:04:02.054441  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:02.118790  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:02.363737  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:02.387766  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:04:03.006153  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:03.006184  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:04:03.006650  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:03.010365  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:03.053869  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:03.119743  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:03.363163  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:03.389761  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:04:03.578302  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:03.641024  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:03.864899  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:03.887082  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:04:04.053990  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:04.118067  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:04.364655  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:04.385953  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:04:04.556681  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:04.619152  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:04.864559  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:04.885678  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:04:05.056713  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:05.118528  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:05.363686  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:05.389452  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:04:05.560676  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:05.619570  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:05.864282  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:05.885456  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:04:06.054577  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:06.118453  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:06.362755  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:06.385342  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1030 23:04:06.553292  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:06.621706  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:06.864115  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:06.886279  216380 kapi.go:107] duration metric: took 58.074777527s to wait for kubernetes.io/minikube-addons=registry ...
	I1030 23:04:07.057079  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:07.121369  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:07.655778  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:07.657579  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:07.660187  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:07.863353  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:08.054897  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:08.119253  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:08.363910  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:08.556550  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:08.623805  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:08.864011  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:09.053958  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:09.121074  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:09.363924  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:09.554722  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:09.624737  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:09.863327  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:10.053686  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:10.118525  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:10.362830  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:10.554085  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:10.632876  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:10.863323  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:11.055738  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:11.118740  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:11.363881  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:11.555733  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:11.621276  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:11.865188  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:12.055397  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:12.118587  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:12.363159  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:12.553993  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:12.624130  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:12.866770  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:13.059881  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:13.118139  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:13.364089  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:13.555680  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:13.629734  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:13.863876  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:14.053802  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:14.124568  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:14.377646  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:14.556783  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:14.618981  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:14.864775  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:15.053710  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:15.118853  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:15.363653  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:15.556572  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:15.619632  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:15.863018  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:16.055900  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:16.119388  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:16.364638  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:16.557883  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:16.619993  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:16.890055  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:17.058985  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:17.125292  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:17.364237  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:17.555388  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:17.618510  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:17.863224  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:18.055414  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:18.119089  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:18.364635  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:18.560145  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:18.618643  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:18.863994  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:19.055614  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:19.119299  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:19.363343  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:19.554736  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:19.624503  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:19.864172  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:20.057696  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:20.118809  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:20.699480  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:20.703391  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:20.704825  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:20.868201  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:21.086984  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:21.122287  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:21.364294  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:21.554737  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:21.619398  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:21.862823  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:22.053947  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:22.118811  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:22.363877  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:22.554266  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:22.622847  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:22.863832  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:23.073398  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:23.119660  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:23.363215  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:23.559065  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:23.632546  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:23.863429  216380 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1030 23:04:24.062551  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:24.118314  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:24.364193  216380 kapi.go:107] duration metric: took 1m15.556117428s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1030 23:04:24.555663  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:24.618092  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:25.307875  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:25.308191  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:25.555976  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:25.618333  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:26.058880  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:26.123420  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:26.554756  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:26.619201  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:27.054097  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:27.121358  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:27.556405  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:27.618249  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1030 23:04:28.061331  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:28.118363  216380 kapi.go:107] duration metric: took 1m16.029524768s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1030 23:04:28.120316  216380 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-780757 cluster.
	I1030 23:04:28.121796  216380 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1030 23:04:28.123365  216380 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1030 23:04:28.579817  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:29.054312  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:29.567362  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:30.055683  216380 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1030 23:04:30.555195  216380 kapi.go:107] duration metric: took 1m20.576314002s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1030 23:04:30.557285  216380 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, nvidia-device-plugin, inspektor-gadget, helm-tiller, metrics-server, ingress-dns, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1030 23:04:30.558795  216380 addons.go:502] enable addons completed in 1m30.554268786s: enabled=[storage-provisioner cloud-spanner nvidia-device-plugin inspektor-gadget helm-tiller metrics-server ingress-dns storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1030 23:04:30.558875  216380 start.go:233] waiting for cluster config update ...
	I1030 23:04:30.558902  216380 start.go:242] writing updated cluster config ...
	I1030 23:04:30.559223  216380 ssh_runner.go:195] Run: rm -f paused
	I1030 23:04:30.617571  216380 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1030 23:04:30.619172  216380 out.go:177] * Done! kubectl is now configured to use "addons-780757" cluster and "default" namespace by default
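The closing message is a statement about kubeconfig state rather than about the test itself. As a quick, hedged verification (plain kubectl invocations, not commands taken from this run; the expected values follow from the message above):

	  kubectl config current-context                               # expected: addons-780757
	  kubectl config view --minify -o 'jsonpath={..namespace}'     # expected: default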
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-10-30 23:02:16 UTC, ends at Mon 2023-10-30 23:07:36 UTC. --
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.475622240Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698707256475607659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529245,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=9a4843b6-8470-4ed7-a33a-bf7938dc72b0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.476148543Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c181d80f-e1db-4ea7-851d-d1bb18f5e87f name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.476189833Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c181d80f-e1db-4ea7-851d-d1bb18f5e87f name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.476460648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea086859013617908b29d9f535282b72e7a22f0a3661893409f51b40428d2e05,PodSandboxId:87c18df5d2383dfc1837c74e30e393e16a1b109805d58aea6415b3861b14952a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,State:CONTAINER_RUNNING,CreatedAt:1698707247949914405,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-mdrvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2bb812e-fecf-415f-ab23-9980029eb82a,},Annotations:map[string]string{io.kubernetes.container.hash: 2ecce9d4,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae54b896d7113d1ba1d9949913c2bcdf7c7afcc6c57a86dda07d1225d6b091ca,PodSandboxId:8154c4c1e29b72ea792e60dcfcd7e39bd0cdbbf80ba88ca6ff1b267bff3967bb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1698707107427377726,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 285680b9-b8c5-4686-af4a-42f41f4f3218,},Annotations:map[string]string{io.kubernet
es.container.hash: e2d7c608,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40104404a7758d996c8285badeeee69448959b6551b17406901069cf387c667,PodSandboxId:be9e3c09703588110c034b69ef8f0635a2f4f77ae6e9981cedc003d977fd5d68,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,State:CONTAINER_RUNNING,CreatedAt:1698707105055909709,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-94b766c-wt7wf,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid:
e87c7130-8952-4faa-8018-6f1bd9b967cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8144c3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:085daaa6f86deb60711834720078848a3a19b108514a63509b8fc4f74954f795,PodSandboxId:900336d21b31d9e6a8efb7a6e00649633c057482bb0cc7a963c3023db6cdd513,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1698707067287917409,Labels:map[string]string{io.kubernetes.container.name: g
cp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-8gmw6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8d174bbf-610c-4968-8ce3-fa54927c0209,},Annotations:map[string]string{io.kubernetes.container.hash: 45ca833e,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9d49062de0fbaca2d17b303a3f6d87f11ee2da06f6253cd48234f930601774,PodSandboxId:654ab99fdcc47627c35b6b4ee922b935a1ddf8199bdc793709d0da2352328131,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:16987070566
40637093,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xrsfz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b0ff8da9-cff2-400c-ba7c-81cd96487427,},Annotations:map[string]string{io.kubernetes.container.hash: 6488ec2f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1bbdc09591faefdaced3e8a99d2475fa8567f72835bf6fc640ce839bc0d9227,PodSandboxId:2729bb16a7955ead9ea309ce2fafe83caa1911b83800cc49c1c58dc387101d41,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b
9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698707039420869883,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ngv8p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c66d6cf5-5613-4b31-979d-432e1e30c812,},Annotations:map[string]string{io.kubernetes.container.hash: c344266,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b19a1e9fcd5453bdcbcaccbfb19006f74b36dbf8c42e6004662b7d51b45baac,PodSandboxId:dedd96eeb53480f974d8e46874a6a4f19dc88a4e482e40e689386a77db5e414d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1d
ddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698706999076292104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a24f119a-67f4-40e1-9c52-12f734c702b4,},Annotations:map[string]string{io.kubernetes.container.hash: 394e9ab2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa714da1d92efa5e071dc8f80069201d934d886a8150fac559a24e96a974ff5f,PodSandboxId:381118843c077230d3634890fb33b75044447eaf3c75553328d422e0c782c284,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:
CONTAINER_RUNNING,CreatedAt:1698706983400821353,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2s8wh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f4faf7-a1f8-4050-ac2e-deb5767caa4d,},Annotations:map[string]string{io.kubernetes.container.hash: 3764baf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:720e63daa724ae25b880fddd914fab33ebf94aae440fb47800048aa8ac1e6d1b,PodSandboxId:d6d55dcd6dd60f101bffd7ce11613440e5f775e4416effb58f0dc742390d8dad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:16987
06987538706825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vnckz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd885ee-b288-4507-900f-ebae0145f76c,},Annotations:map[string]string{io.kubernetes.container.hash: 220f91e6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08781bf7e68bacc5bc7d81d0719dcd11eb80b4972a90a02426bf48a0d9b151f,PodSandboxId:07f49741fd08df2da42e495dfe26e88c5b2647353b90b30b239e932606b85b69,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d2
6df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698706961255695312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-780757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e340b19a0a7bfb40ecc1a947e77ed9b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55affadbf1d63703b86562ee41e06a54dbfe0d91edeb46268c199451124608a7,PodSandboxId:7c1cf0aa8ad353e2798dbe2221f6cc6eaaab4684b6642a520d34055fbca44cf4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annota
tions:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698706960939224955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-780757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea66f2632aeb287e24875a4f7cfdc0f7,},Annotations:map[string]string{io.kubernetes.container.hash: ef7bb81b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98736f03fbd170395201c4931725f2f10ec1db829d51e542618c0f1a4b08fc11,PodSandboxId:48fa46b7ea80ff2d10006437c4a8cc80a8c4667784d7eb033936fa9bd1c5824e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry
.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698706960959905553,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-780757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d99509206fead8a2d570bb27198fc,},Annotations:map[string]string{io.kubernetes.container.hash: 833e487d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114427d98258ff5391a4c9add9f6fad9ccb7cdb8aa2604a3e0ed1d1395b40732,PodSandboxId:4bb7319397353e83ec8c7317eef9e24694936ef7f96d720e2cae1d059df501cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.i
o/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698706960660534333,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-780757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52b3d8610f80814af1920ed0b3d9583a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c181d80f-e1db-4ea7-851d-d1bb18f5e87f name=/runtime.v1.RuntimeService/ListContainers
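The debug entries above, and the near-identical blocks that follow, are one polling cycle from a CRI client (most likely the kubelet's periodic sync) against CRI-O's gRPC API: a Version request, an ImageFsInfo request, and an unfiltered ListContainers request, each answered in full. The same endpoints can be exercised by hand on the node when debugging, sketched here with crictl; the socket path is the conventional CRI-O default and is an assumption for this host.

	  # Drive the same RPCs seen in the journal above manually:
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a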
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.517542606Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=415ca6a9-0a88-4498-bcc5-e4b08bd39592 name=/runtime.v1.RuntimeService/Version
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.517598744Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=415ca6a9-0a88-4498-bcc5-e4b08bd39592 name=/runtime.v1.RuntimeService/Version
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.519104361Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7fea75be-8ffb-47ce-b98e-d63ecd513f39 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.520638780Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698707256520623052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529245,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=7fea75be-8ffb-47ce-b98e-d63ecd513f39 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.521681772Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c5c95a0d-66bd-446b-a2cb-6ee9cb8f192c name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.521731815Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c5c95a0d-66bd-446b-a2cb-6ee9cb8f192c name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.522068777Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea086859013617908b29d9f535282b72e7a22f0a3661893409f51b40428d2e05,PodSandboxId:87c18df5d2383dfc1837c74e30e393e16a1b109805d58aea6415b3861b14952a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,State:CONTAINER_RUNNING,CreatedAt:1698707247949914405,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-mdrvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2bb812e-fecf-415f-ab23-9980029eb82a,},Annotations:map[string]string{io.kubernetes.container.hash: 2ecce9d4,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae54b896d7113d1ba1d9949913c2bcdf7c7afcc6c57a86dda07d1225d6b091ca,PodSandboxId:8154c4c1e29b72ea792e60dcfcd7e39bd0cdbbf80ba88ca6ff1b267bff3967bb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1698707107427377726,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 285680b9-b8c5-4686-af4a-42f41f4f3218,},Annotations:map[string]string{io.kubernet
es.container.hash: e2d7c608,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40104404a7758d996c8285badeeee69448959b6551b17406901069cf387c667,PodSandboxId:be9e3c09703588110c034b69ef8f0635a2f4f77ae6e9981cedc003d977fd5d68,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,State:CONTAINER_RUNNING,CreatedAt:1698707105055909709,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-94b766c-wt7wf,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid:
e87c7130-8952-4faa-8018-6f1bd9b967cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8144c3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:085daaa6f86deb60711834720078848a3a19b108514a63509b8fc4f74954f795,PodSandboxId:900336d21b31d9e6a8efb7a6e00649633c057482bb0cc7a963c3023db6cdd513,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1698707067287917409,Labels:map[string]string{io.kubernetes.container.name: g
cp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-8gmw6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8d174bbf-610c-4968-8ce3-fa54927c0209,},Annotations:map[string]string{io.kubernetes.container.hash: 45ca833e,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9d49062de0fbaca2d17b303a3f6d87f11ee2da06f6253cd48234f930601774,PodSandboxId:654ab99fdcc47627c35b6b4ee922b935a1ddf8199bdc793709d0da2352328131,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:16987070566
40637093,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xrsfz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b0ff8da9-cff2-400c-ba7c-81cd96487427,},Annotations:map[string]string{io.kubernetes.container.hash: 6488ec2f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1bbdc09591faefdaced3e8a99d2475fa8567f72835bf6fc640ce839bc0d9227,PodSandboxId:2729bb16a7955ead9ea309ce2fafe83caa1911b83800cc49c1c58dc387101d41,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b
9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698707039420869883,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ngv8p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c66d6cf5-5613-4b31-979d-432e1e30c812,},Annotations:map[string]string{io.kubernetes.container.hash: c344266,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b19a1e9fcd5453bdcbcaccbfb19006f74b36dbf8c42e6004662b7d51b45baac,PodSandboxId:dedd96eeb53480f974d8e46874a6a4f19dc88a4e482e40e689386a77db5e414d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1d
ddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698706999076292104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a24f119a-67f4-40e1-9c52-12f734c702b4,},Annotations:map[string]string{io.kubernetes.container.hash: 394e9ab2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa714da1d92efa5e071dc8f80069201d934d886a8150fac559a24e96a974ff5f,PodSandboxId:381118843c077230d3634890fb33b75044447eaf3c75553328d422e0c782c284,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:
CONTAINER_RUNNING,CreatedAt:1698706983400821353,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2s8wh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f4faf7-a1f8-4050-ac2e-deb5767caa4d,},Annotations:map[string]string{io.kubernetes.container.hash: 3764baf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:720e63daa724ae25b880fddd914fab33ebf94aae440fb47800048aa8ac1e6d1b,PodSandboxId:d6d55dcd6dd60f101bffd7ce11613440e5f775e4416effb58f0dc742390d8dad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:16987
06987538706825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vnckz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd885ee-b288-4507-900f-ebae0145f76c,},Annotations:map[string]string{io.kubernetes.container.hash: 220f91e6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08781bf7e68bacc5bc7d81d0719dcd11eb80b4972a90a02426bf48a0d9b151f,PodSandboxId:07f49741fd08df2da42e495dfe26e88c5b2647353b90b30b239e932606b85b69,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d2
6df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698706961255695312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-780757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e340b19a0a7bfb40ecc1a947e77ed9b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55affadbf1d63703b86562ee41e06a54dbfe0d91edeb46268c199451124608a7,PodSandboxId:7c1cf0aa8ad353e2798dbe2221f6cc6eaaab4684b6642a520d34055fbca44cf4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annota
tions:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698706960939224955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-780757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea66f2632aeb287e24875a4f7cfdc0f7,},Annotations:map[string]string{io.kubernetes.container.hash: ef7bb81b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98736f03fbd170395201c4931725f2f10ec1db829d51e542618c0f1a4b08fc11,PodSandboxId:48fa46b7ea80ff2d10006437c4a8cc80a8c4667784d7eb033936fa9bd1c5824e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry
.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698706960959905553,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-780757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d99509206fead8a2d570bb27198fc,},Annotations:map[string]string{io.kubernetes.container.hash: 833e487d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114427d98258ff5391a4c9add9f6fad9ccb7cdb8aa2604a3e0ed1d1395b40732,PodSandboxId:4bb7319397353e83ec8c7317eef9e24694936ef7f96d720e2cae1d059df501cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.i
o/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698706960660534333,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-780757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52b3d8610f80814af1920ed0b3d9583a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c5c95a0d-66bd-446b-a2cb-6ee9cb8f192c name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.558406237Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9a20d45b-e3b8-41e1-88ab-faf29ddda3c8 name=/runtime.v1.RuntimeService/Version
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.558464589Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9a20d45b-e3b8-41e1-88ab-faf29ddda3c8 name=/runtime.v1.RuntimeService/Version
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.560298338Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=834d47f7-a134-4b9d-8d6c-f1b862639966 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.561480594Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698707256561465298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529245,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=834d47f7-a134-4b9d-8d6c-f1b862639966 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.562116327Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bbe44ece-b757-4c69-8ced-c1eeb657f09d name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.562162070Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bbe44ece-b757-4c69-8ced-c1eeb657f09d name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.562444858Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea086859013617908b29d9f535282b72e7a22f0a3661893409f51b40428d2e05,PodSandboxId:87c18df5d2383dfc1837c74e30e393e16a1b109805d58aea6415b3861b14952a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,State:CONTAINER_RUNNING,CreatedAt:1698707247949914405,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-mdrvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2bb812e-fecf-415f-ab23-9980029eb82a,},Annotations:map[string]string{io.kubernetes.container.hash: 2ecce9d4,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae54b896d7113d1ba1d9949913c2bcdf7c7afcc6c57a86dda07d1225d6b091ca,PodSandboxId:8154c4c1e29b72ea792e60dcfcd7e39bd0cdbbf80ba88ca6ff1b267bff3967bb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1698707107427377726,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 285680b9-b8c5-4686-af4a-42f41f4f3218,},Annotations:map[string]string{io.kubernet
es.container.hash: e2d7c608,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40104404a7758d996c8285badeeee69448959b6551b17406901069cf387c667,PodSandboxId:be9e3c09703588110c034b69ef8f0635a2f4f77ae6e9981cedc003d977fd5d68,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,State:CONTAINER_RUNNING,CreatedAt:1698707105055909709,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-94b766c-wt7wf,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid:
e87c7130-8952-4faa-8018-6f1bd9b967cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8144c3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:085daaa6f86deb60711834720078848a3a19b108514a63509b8fc4f74954f795,PodSandboxId:900336d21b31d9e6a8efb7a6e00649633c057482bb0cc7a963c3023db6cdd513,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1698707067287917409,Labels:map[string]string{io.kubernetes.container.name: g
cp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-8gmw6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8d174bbf-610c-4968-8ce3-fa54927c0209,},Annotations:map[string]string{io.kubernetes.container.hash: 45ca833e,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9d49062de0fbaca2d17b303a3f6d87f11ee2da06f6253cd48234f930601774,PodSandboxId:654ab99fdcc47627c35b6b4ee922b935a1ddf8199bdc793709d0da2352328131,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:16987070566
40637093,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xrsfz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b0ff8da9-cff2-400c-ba7c-81cd96487427,},Annotations:map[string]string{io.kubernetes.container.hash: 6488ec2f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1bbdc09591faefdaced3e8a99d2475fa8567f72835bf6fc640ce839bc0d9227,PodSandboxId:2729bb16a7955ead9ea309ce2fafe83caa1911b83800cc49c1c58dc387101d41,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b
9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698707039420869883,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ngv8p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c66d6cf5-5613-4b31-979d-432e1e30c812,},Annotations:map[string]string{io.kubernetes.container.hash: c344266,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b19a1e9fcd5453bdcbcaccbfb19006f74b36dbf8c42e6004662b7d51b45baac,PodSandboxId:dedd96eeb53480f974d8e46874a6a4f19dc88a4e482e40e689386a77db5e414d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1d
ddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698706999076292104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a24f119a-67f4-40e1-9c52-12f734c702b4,},Annotations:map[string]string{io.kubernetes.container.hash: 394e9ab2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa714da1d92efa5e071dc8f80069201d934d886a8150fac559a24e96a974ff5f,PodSandboxId:381118843c077230d3634890fb33b75044447eaf3c75553328d422e0c782c284,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:
CONTAINER_RUNNING,CreatedAt:1698706983400821353,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2s8wh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f4faf7-a1f8-4050-ac2e-deb5767caa4d,},Annotations:map[string]string{io.kubernetes.container.hash: 3764baf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:720e63daa724ae25b880fddd914fab33ebf94aae440fb47800048aa8ac1e6d1b,PodSandboxId:d6d55dcd6dd60f101bffd7ce11613440e5f775e4416effb58f0dc742390d8dad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:16987
06987538706825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vnckz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd885ee-b288-4507-900f-ebae0145f76c,},Annotations:map[string]string{io.kubernetes.container.hash: 220f91e6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08781bf7e68bacc5bc7d81d0719dcd11eb80b4972a90a02426bf48a0d9b151f,PodSandboxId:07f49741fd08df2da42e495dfe26e88c5b2647353b90b30b239e932606b85b69,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d2
6df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698706961255695312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-780757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e340b19a0a7bfb40ecc1a947e77ed9b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55affadbf1d63703b86562ee41e06a54dbfe0d91edeb46268c199451124608a7,PodSandboxId:7c1cf0aa8ad353e2798dbe2221f6cc6eaaab4684b6642a520d34055fbca44cf4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annota
tions:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698706960939224955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-780757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea66f2632aeb287e24875a4f7cfdc0f7,},Annotations:map[string]string{io.kubernetes.container.hash: ef7bb81b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98736f03fbd170395201c4931725f2f10ec1db829d51e542618c0f1a4b08fc11,PodSandboxId:48fa46b7ea80ff2d10006437c4a8cc80a8c4667784d7eb033936fa9bd1c5824e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry
.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698706960959905553,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-780757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d99509206fead8a2d570bb27198fc,},Annotations:map[string]string{io.kubernetes.container.hash: 833e487d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114427d98258ff5391a4c9add9f6fad9ccb7cdb8aa2604a3e0ed1d1395b40732,PodSandboxId:4bb7319397353e83ec8c7317eef9e24694936ef7f96d720e2cae1d059df501cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.i
o/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698706960660534333,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-780757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52b3d8610f80814af1920ed0b3d9583a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bbe44ece-b757-4c69-8ced-c1eeb657f09d name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.608072633Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a3ccfd1a-345e-4a2b-817d-36142da4a84f name=/runtime.v1.RuntimeService/Version
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.608134430Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a3ccfd1a-345e-4a2b-817d-36142da4a84f name=/runtime.v1.RuntimeService/Version
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.609790038Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a8a66c5c-e2bd-4d15-b4d6-5da9cb3921c8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.611304398Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698707256611287436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529245,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=a8a66c5c-e2bd-4d15-b4d6-5da9cb3921c8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.612180638Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=46f81115-1743-44c3-9a2b-f602e3690eaf name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.612324502Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=46f81115-1743-44c3-9a2b-f602e3690eaf name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:07:36 addons-780757 crio[710]: time="2023-10-30 23:07:36.612621354Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea086859013617908b29d9f535282b72e7a22f0a3661893409f51b40428d2e05,PodSandboxId:87c18df5d2383dfc1837c74e30e393e16a1b109805d58aea6415b3861b14952a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,State:CONTAINER_RUNNING,CreatedAt:1698707247949914405,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-mdrvn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2bb812e-fecf-415f-ab23-9980029eb82a,},Annotations:map[string]string{io.kubernetes.container.hash: 2ecce9d4,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae54b896d7113d1ba1d9949913c2bcdf7c7afcc6c57a86dda07d1225d6b091ca,PodSandboxId:8154c4c1e29b72ea792e60dcfcd7e39bd0cdbbf80ba88ca6ff1b267bff3967bb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1698707107427377726,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 285680b9-b8c5-4686-af4a-42f41f4f3218,},Annotations:map[string]string{io.kubernet
es.container.hash: e2d7c608,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40104404a7758d996c8285badeeee69448959b6551b17406901069cf387c667,PodSandboxId:be9e3c09703588110c034b69ef8f0635a2f4f77ae6e9981cedc003d977fd5d68,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,State:CONTAINER_RUNNING,CreatedAt:1698707105055909709,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-94b766c-wt7wf,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid:
e87c7130-8952-4faa-8018-6f1bd9b967cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8144c3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:085daaa6f86deb60711834720078848a3a19b108514a63509b8fc4f74954f795,PodSandboxId:900336d21b31d9e6a8efb7a6e00649633c057482bb0cc7a963c3023db6cdd513,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1698707067287917409,Labels:map[string]string{io.kubernetes.container.name: g
cp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-8gmw6,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8d174bbf-610c-4968-8ce3-fa54927c0209,},Annotations:map[string]string{io.kubernetes.container.hash: 45ca833e,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9d49062de0fbaca2d17b303a3f6d87f11ee2da06f6253cd48234f930601774,PodSandboxId:654ab99fdcc47627c35b6b4ee922b935a1ddf8199bdc793709d0da2352328131,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:16987070566
40637093,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xrsfz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b0ff8da9-cff2-400c-ba7c-81cd96487427,},Annotations:map[string]string{io.kubernetes.container.hash: 6488ec2f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1bbdc09591faefdaced3e8a99d2475fa8567f72835bf6fc640ce839bc0d9227,PodSandboxId:2729bb16a7955ead9ea309ce2fafe83caa1911b83800cc49c1c58dc387101d41,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b
9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698707039420869883,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ngv8p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c66d6cf5-5613-4b31-979d-432e1e30c812,},Annotations:map[string]string{io.kubernetes.container.hash: c344266,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b19a1e9fcd5453bdcbcaccbfb19006f74b36dbf8c42e6004662b7d51b45baac,PodSandboxId:dedd96eeb53480f974d8e46874a6a4f19dc88a4e482e40e689386a77db5e414d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1d
ddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698706999076292104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a24f119a-67f4-40e1-9c52-12f734c702b4,},Annotations:map[string]string{io.kubernetes.container.hash: 394e9ab2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa714da1d92efa5e071dc8f80069201d934d886a8150fac559a24e96a974ff5f,PodSandboxId:381118843c077230d3634890fb33b75044447eaf3c75553328d422e0c782c284,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:
CONTAINER_RUNNING,CreatedAt:1698706983400821353,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2s8wh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f4faf7-a1f8-4050-ac2e-deb5767caa4d,},Annotations:map[string]string{io.kubernetes.container.hash: 3764baf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:720e63daa724ae25b880fddd914fab33ebf94aae440fb47800048aa8ac1e6d1b,PodSandboxId:d6d55dcd6dd60f101bffd7ce11613440e5f775e4416effb58f0dc742390d8dad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:16987
06987538706825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vnckz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd885ee-b288-4507-900f-ebae0145f76c,},Annotations:map[string]string{io.kubernetes.container.hash: 220f91e6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08781bf7e68bacc5bc7d81d0719dcd11eb80b4972a90a02426bf48a0d9b151f,PodSandboxId:07f49741fd08df2da42e495dfe26e88c5b2647353b90b30b239e932606b85b69,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d2
6df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698706961255695312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-780757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e340b19a0a7bfb40ecc1a947e77ed9b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55affadbf1d63703b86562ee41e06a54dbfe0d91edeb46268c199451124608a7,PodSandboxId:7c1cf0aa8ad353e2798dbe2221f6cc6eaaab4684b6642a520d34055fbca44cf4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annota
tions:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698706960939224955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-780757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea66f2632aeb287e24875a4f7cfdc0f7,},Annotations:map[string]string{io.kubernetes.container.hash: ef7bb81b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98736f03fbd170395201c4931725f2f10ec1db829d51e542618c0f1a4b08fc11,PodSandboxId:48fa46b7ea80ff2d10006437c4a8cc80a8c4667784d7eb033936fa9bd1c5824e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry
.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698706960959905553,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-780757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d99509206fead8a2d570bb27198fc,},Annotations:map[string]string{io.kubernetes.container.hash: 833e487d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114427d98258ff5391a4c9add9f6fad9ccb7cdb8aa2604a3e0ed1d1395b40732,PodSandboxId:4bb7319397353e83ec8c7317eef9e24694936ef7f96d720e2cae1d059df501cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.i
o/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698706960660534333,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-780757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52b3d8610f80814af1920ed0b3d9583a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=46f81115-1743-44c3-9a2b-f602e3690eaf name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ea08685901361       gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d                      8 seconds ago       Running             hello-world-app           0                   87c18df5d2383       hello-world-app-5d77478584-mdrvn
	ae54b896d7113       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                              2 minutes ago       Running             nginx                     0                   8154c4c1e29b7       nginx
	e40104404a775       ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4                        2 minutes ago       Running             headlamp                  0                   be9e3c0970358       headlamp-94b766c-wt7wf
	085daaa6f86de       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   900336d21b31d       gcp-auth-d4c87556c-8gmw6
	7c9d49062de0f       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             3 minutes ago       Exited              patch                     3                   654ab99fdcc47       ingress-nginx-admission-patch-xrsfz
	c1bbdc09591fa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   2729bb16a7955       ingress-nginx-admission-create-ngv8p
	4b19a1e9fcd54       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   dedd96eeb5348       storage-provisioner
	720e63daa724a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   d6d55dcd6dd60       coredns-5dd5756b68-vnckz
	fa714da1d92ef       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                                             4 minutes ago       Running             kube-proxy                0                   381118843c077       kube-proxy-2s8wh
	f08781bf7e68b       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                                             4 minutes ago       Running             kube-scheduler            0                   07f49741fd08d       kube-scheduler-addons-780757
	98736f03fbd17       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                                             4 minutes ago       Running             kube-apiserver            0                   48fa46b7ea80f       kube-apiserver-addons-780757
	55affadbf1d63       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   7c1cf0aa8ad35       etcd-addons-780757
	114427d98258f       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                                             4 minutes ago       Running             kube-controller-manager   0                   4bb7319397353       kube-controller-manager-addons-780757
	
	* 
	* ==> coredns [720e63daa724ae25b880fddd914fab33ebf94aae440fb47800048aa8ac1e6d1b] <==
	* [INFO] 10.244.0.8:60931 - 36537 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000381903s
	[INFO] 10.244.0.8:42180 - 54913 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001171s
	[INFO] 10.244.0.8:42180 - 11772 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059481s
	[INFO] 10.244.0.8:51831 - 34910 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000142319s
	[INFO] 10.244.0.8:51831 - 38744 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067013s
	[INFO] 10.244.0.8:40971 - 57828 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000172938s
	[INFO] 10.244.0.8:40971 - 25318 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000144903s
	[INFO] 10.244.0.8:49039 - 35778 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000114959s
	[INFO] 10.244.0.8:49039 - 26561 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000089712s
	[INFO] 10.244.0.8:48887 - 42988 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000083563s
	[INFO] 10.244.0.8:48887 - 29678 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000068301s
	[INFO] 10.244.0.8:52984 - 39407 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000075059s
	[INFO] 10.244.0.8:52984 - 34029 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003188s
	[INFO] 10.244.0.8:40769 - 19224 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000076098s
	[INFO] 10.244.0.8:40769 - 28442 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000094269s
	[INFO] 10.244.0.20:35047 - 38208 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00042517s
	[INFO] 10.244.0.20:56182 - 26273 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000137815s
	[INFO] 10.244.0.20:52115 - 9568 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123818s
	[INFO] 10.244.0.20:40779 - 3893 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00014589s
	[INFO] 10.244.0.20:47443 - 10036 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000166304s
	[INFO] 10.244.0.20:44589 - 10817 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000113146s
	[INFO] 10.244.0.20:45614 - 36065 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.00067598s
	[INFO] 10.244.0.20:51677 - 38625 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.0004812s
	[INFO] 10.244.0.23:32882 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000427298s
	[INFO] 10.244.0.23:58178 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000292004s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-780757
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-780757
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4
	                    minikube.k8s.io/name=addons-780757
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_30T23_02_48_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-780757
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Oct 2023 23:02:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-780757
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Oct 2023 23:07:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Oct 2023 23:05:22 +0000   Mon, 30 Oct 2023 23:02:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Oct 2023 23:05:22 +0000   Mon, 30 Oct 2023 23:02:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Oct 2023 23:05:22 +0000   Mon, 30 Oct 2023 23:02:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Oct 2023 23:05:22 +0000   Mon, 30 Oct 2023 23:02:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    addons-780757
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 c0c84a22756d49fb8cabfdf4f677b9db
	  System UUID:                c0c84a22-756d-49fb-8cab-fdf4f677b9db
	  Boot ID:                    55d3aa6c-103c-4639-873c-9d279e58dba6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-mdrvn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  gcp-auth                    gcp-auth-d4c87556c-8gmw6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  headlamp                    headlamp-94b766c-wt7wf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 coredns-5dd5756b68-vnckz                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m35s
	  kube-system                 etcd-addons-780757                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m48s
	  kube-system                 kube-apiserver-addons-780757             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-controller-manager-addons-780757    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-proxy-2s8wh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-scheduler-addons-780757             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m22s                  kube-proxy       
	  Normal  Starting                 4m57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m57s (x8 over 4m57s)  kubelet          Node addons-780757 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m57s (x8 over 4m57s)  kubelet          Node addons-780757 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m57s (x7 over 4m57s)  kubelet          Node addons-780757 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m48s                  kubelet          Node addons-780757 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m48s                  kubelet          Node addons-780757 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m48s                  kubelet          Node addons-780757 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m48s                  kubelet          Node addons-780757 status is now: NodeReady
	  Normal  RegisteredNode           4m37s                  node-controller  Node addons-780757 event: Registered Node addons-780757 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.150536] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.024961] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.375475] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.102927] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.154106] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.119403] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.224869] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[  +9.654294] systemd-fstab-generator[905]: Ignoring "noauto" for root device
	[  +8.741850] systemd-fstab-generator[1236]: Ignoring "noauto" for root device
	[Oct30 23:03] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.138215] kauditd_printk_skb: 30 callbacks suppressed
	[ +18.212634] kauditd_printk_skb: 16 callbacks suppressed
	[ +15.494081] kauditd_printk_skb: 16 callbacks suppressed
	[Oct30 23:04] kauditd_printk_skb: 3 callbacks suppressed
	[ +21.145420] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.076731] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.784522] kauditd_printk_skb: 16 callbacks suppressed
	[  +6.536152] kauditd_printk_skb: 6 callbacks suppressed
	[Oct30 23:05] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.153483] kauditd_printk_skb: 1 callbacks suppressed
	[ +31.202645] kauditd_printk_skb: 12 callbacks suppressed
	[Oct30 23:07] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [55affadbf1d63703b86562ee41e06a54dbfe0d91edeb46268c199451124608a7] <==
	* {"level":"info","ts":"2023-10-30T23:04:20.694077Z","caller":"traceutil/trace.go:171","msg":"trace[1051170097] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; response_count:0; response_revision:1142; }","duration":"260.632697ms","start":"2023-10-30T23:04:20.433437Z","end":"2023-10-30T23:04:20.69407Z","steps":["trace[1051170097] 'count revisions from in-memory index tree'  (duration: 259.939035ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-30T23:04:20.689923Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.73374ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82389"}
	{"level":"info","ts":"2023-10-30T23:04:20.694203Z","caller":"traceutil/trace.go:171","msg":"trace[466226285] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1142; }","duration":"147.075181ms","start":"2023-10-30T23:04:20.547121Z","end":"2023-10-30T23:04:20.694196Z","steps":["trace[466226285] 'range keys from in-memory index tree'  (duration: 142.283078ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-30T23:04:25.294287Z","caller":"traceutil/trace.go:171","msg":"trace[1349664684] transaction","detail":"{read_only:false; response_revision:1169; number_of_response:1; }","duration":"324.743847ms","start":"2023-10-30T23:04:24.969521Z","end":"2023-10-30T23:04:25.294265Z","steps":["trace[1349664684] 'process raft request'  (duration: 324.193109ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-30T23:04:25.294413Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-30T23:04:24.969486Z","time spent":"324.877194ms","remote":"127.0.0.1:51208","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":732,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-controller-7c6974c4d8-jgz87.1793051350c11686\" mod_revision:0 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-7c6974c4d8-jgz87.1793051350c11686\" value_size:625 lease:3769947757091584232 >> failure:<>"}
	{"level":"info","ts":"2023-10-30T23:04:25.294186Z","caller":"traceutil/trace.go:171","msg":"trace[58070916] linearizableReadLoop","detail":"{readStateIndex:1201; appliedIndex:1200; }","duration":"247.060582ms","start":"2023-10-30T23:04:25.047111Z","end":"2023-10-30T23:04:25.294172Z","steps":["trace[58070916] 'read index received'  (duration: 246.557696ms)","trace[58070916] 'applied index is now lower than readState.Index'  (duration: 502.142µs)"],"step_count":2}
	{"level":"warn","ts":"2023-10-30T23:04:25.295726Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.544681ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82389"}
	{"level":"info","ts":"2023-10-30T23:04:25.295866Z","caller":"traceutil/trace.go:171","msg":"trace[1063140277] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1169; }","duration":"248.769725ms","start":"2023-10-30T23:04:25.047087Z","end":"2023-10-30T23:04:25.295857Z","steps":["trace[1063140277] 'agreement among raft nodes before linearized reading'  (duration: 248.010947ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-30T23:04:25.297446Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.098483ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10951"}
	{"level":"info","ts":"2023-10-30T23:04:25.300407Z","caller":"traceutil/trace.go:171","msg":"trace[79593676] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1169; }","duration":"186.062867ms","start":"2023-10-30T23:04:25.114335Z","end":"2023-10-30T23:04:25.300398Z","steps":["trace[79593676] 'agreement among raft nodes before linearized reading'  (duration: 183.05412ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-30T23:04:25.297479Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.430746ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-10-30T23:04:25.300642Z","caller":"traceutil/trace.go:171","msg":"trace[305788548] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1169; }","duration":"126.586973ms","start":"2023-10-30T23:04:25.174045Z","end":"2023-10-30T23:04:25.300632Z","steps":["trace[305788548] 'agreement among raft nodes before linearized reading'  (duration: 123.417422ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-30T23:05:01.336925Z","caller":"traceutil/trace.go:171","msg":"trace[805629879] transaction","detail":"{read_only:false; response_revision:1510; number_of_response:1; }","duration":"204.436673ms","start":"2023-10-30T23:05:01.132452Z","end":"2023-10-30T23:05:01.336888Z","steps":["trace[805629879] 'process raft request'  (duration: 204.282977ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-30T23:05:12.100716Z","caller":"traceutil/trace.go:171","msg":"trace[234885217] transaction","detail":"{read_only:false; response_revision:1557; number_of_response:1; }","duration":"432.938588ms","start":"2023-10-30T23:05:11.667756Z","end":"2023-10-30T23:05:12.100695Z","steps":["trace[234885217] 'process raft request'  (duration: 432.793282ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-30T23:05:12.101207Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-30T23:05:11.667726Z","time spent":"433.238648ms","remote":"127.0.0.1:51228","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1554 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-10-30T23:05:12.101334Z","caller":"traceutil/trace.go:171","msg":"trace[1178836348] linearizableReadLoop","detail":"{readStateIndex:1609; appliedIndex:1609; }","duration":"422.529685ms","start":"2023-10-30T23:05:11.67879Z","end":"2023-10-30T23:05:12.10132Z","steps":["trace[1178836348] 'read index received'  (duration: 422.524114ms)","trace[1178836348] 'applied index is now lower than readState.Index'  (duration: 4.598µs)"],"step_count":2}
	{"level":"warn","ts":"2023-10-30T23:05:12.101552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"422.774497ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5638"}
	{"level":"info","ts":"2023-10-30T23:05:12.101607Z","caller":"traceutil/trace.go:171","msg":"trace[455098400] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1557; }","duration":"422.838094ms","start":"2023-10-30T23:05:11.678762Z","end":"2023-10-30T23:05:12.1016Z","steps":["trace[455098400] 'agreement among raft nodes before linearized reading'  (duration: 422.71272ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-30T23:05:12.101631Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-30T23:05:11.678746Z","time spent":"422.878987ms","remote":"127.0.0.1:51232","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":2,"response size":5662,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2023-10-30T23:05:12.101944Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.039784ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5638"}
	{"level":"info","ts":"2023-10-30T23:05:12.101965Z","caller":"traceutil/trace.go:171","msg":"trace[1082371489] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1558; }","duration":"158.065639ms","start":"2023-10-30T23:05:11.943894Z","end":"2023-10-30T23:05:12.10196Z","steps":["trace[1082371489] 'agreement among raft nodes before linearized reading'  (duration: 158.004919ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-30T23:05:12.102249Z","caller":"traceutil/trace.go:171","msg":"trace[2018053576] transaction","detail":"{read_only:false; response_revision:1558; number_of_response:1; }","duration":"134.864908ms","start":"2023-10-30T23:05:11.967377Z","end":"2023-10-30T23:05:12.102242Z","steps":["trace[2018053576] 'process raft request'  (duration: 134.466055ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-30T23:05:14.300764Z","caller":"traceutil/trace.go:171","msg":"trace[2073299966] transaction","detail":"{read_only:false; response_revision:1560; number_of_response:1; }","duration":"187.251873ms","start":"2023-10-30T23:05:14.113498Z","end":"2023-10-30T23:05:14.30075Z","steps":["trace[2073299966] 'process raft request'  (duration: 187.10106ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-30T23:05:14.545447Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.006174ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-30T23:05:14.545521Z","caller":"traceutil/trace.go:171","msg":"trace[187902847] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1560; }","duration":"139.096402ms","start":"2023-10-30T23:05:14.406413Z","end":"2023-10-30T23:05:14.545509Z","steps":["trace[187902847] 'range keys from in-memory index tree'  (duration: 138.930291ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [085daaa6f86deb60711834720078848a3a19b108514a63509b8fc4f74954f795] <==
	* 2023/10/30 23:04:27 GCP Auth Webhook started!
	2023/10/30 23:04:31 Ready to marshal response ...
	2023/10/30 23:04:31 Ready to write response ...
	2023/10/30 23:04:31 Ready to marshal response ...
	2023/10/30 23:04:31 Ready to write response ...
	2023/10/30 23:04:40 Ready to marshal response ...
	2023/10/30 23:04:40 Ready to write response ...
	2023/10/30 23:04:41 Ready to marshal response ...
	2023/10/30 23:04:41 Ready to write response ...
	2023/10/30 23:04:47 Ready to marshal response ...
	2023/10/30 23:04:47 Ready to write response ...
	2023/10/30 23:04:58 Ready to marshal response ...
	2023/10/30 23:04:58 Ready to write response ...
	2023/10/30 23:04:58 Ready to marshal response ...
	2023/10/30 23:04:58 Ready to write response ...
	2023/10/30 23:04:58 Ready to marshal response ...
	2023/10/30 23:04:58 Ready to write response ...
	2023/10/30 23:04:59 Ready to marshal response ...
	2023/10/30 23:04:59 Ready to write response ...
	2023/10/30 23:05:08 Ready to marshal response ...
	2023/10/30 23:05:08 Ready to write response ...
	2023/10/30 23:05:25 Ready to marshal response ...
	2023/10/30 23:05:25 Ready to write response ...
	2023/10/30 23:07:25 Ready to marshal response ...
	2023/10/30 23:07:25 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:07:37 up 5 min,  0 users,  load average: 1.78, 1.84, 0.94
	Linux addons-780757 5.10.57 #1 SMP Mon Oct 30 21:42:24 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [98736f03fbd170395201c4931725f2f10ec1db829d51e542618c0f1a4b08fc11] <==
	* I1030 23:04:51.814747       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1030 23:04:52.850199       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E1030 23:04:57.380263       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1030 23:04:58.441176       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.19.31"}
	I1030 23:04:59.642451       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1030 23:04:59.911887       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.61.61"}
	I1030 23:05:21.806374       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1030 23:05:43.891232       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1030 23:05:43.891300       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1030 23:05:43.906360       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1030 23:05:43.906468       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1030 23:05:43.932534       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1030 23:05:43.932609       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1030 23:05:43.943491       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1030 23:05:43.943627       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1030 23:05:43.958591       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1030 23:05:43.958686       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1030 23:05:43.968906       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1030 23:05:43.968976       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1030 23:05:43.978816       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1030 23:05:43.978884       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1030 23:05:44.958962       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1030 23:05:44.969829       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1030 23:05:45.001810       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1030 23:07:25.661749       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.34.239"}
	
	* 
	* ==> kube-controller-manager [114427d98258ff5391a4c9add9f6fad9ccb7cdb8aa2604a3e0ed1d1395b40732] <==
	* E1030 23:06:18.792387       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1030 23:06:27.456149       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 23:06:27.456235       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1030 23:06:27.681897       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 23:06:27.682074       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1030 23:06:33.146092       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 23:06:33.146263       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1030 23:06:56.921575       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 23:06:56.921819       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1030 23:07:04.668243       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 23:07:04.668319       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1030 23:07:06.239277       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 23:07:06.239397       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1030 23:07:22.479617       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1030 23:07:22.479804       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1030 23:07:25.397760       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1030 23:07:25.434329       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-mdrvn"
	I1030 23:07:25.441907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="45.100358ms"
	I1030 23:07:25.480136       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="38.15744ms"
	I1030 23:07:25.480288       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="35.769µs"
	I1030 23:07:28.513294       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1030 23:07:28.519494       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="4.029µs"
	I1030 23:07:28.522189       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1030 23:07:28.725164       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="10.880156ms"
	I1030 23:07:28.725551       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="73.382µs"
	
	* 
	* ==> kube-proxy [fa714da1d92efa5e071dc8f80069201d934d886a8150fac559a24e96a974ff5f] <==
	* I1030 23:03:13.321884       1 server_others.go:69] "Using iptables proxy"
	I1030 23:03:13.384453       1 node.go:141] Successfully retrieved node IP: 192.168.39.172
	I1030 23:03:13.893472       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1030 23:03:13.898417       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1030 23:03:14.053847       1 server_others.go:152] "Using iptables Proxier"
	I1030 23:03:14.053915       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1030 23:03:14.058807       1 server.go:846] "Version info" version="v1.28.3"
	I1030 23:03:14.058848       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 23:03:14.083368       1 config.go:188] "Starting service config controller"
	I1030 23:03:14.083451       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1030 23:03:14.083547       1 config.go:97] "Starting endpoint slice config controller"
	I1030 23:03:14.083600       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1030 23:03:14.111896       1 config.go:315] "Starting node config controller"
	I1030 23:03:14.119952       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1030 23:03:14.284211       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1030 23:03:14.284302       1 shared_informer.go:318] Caches are synced for service config
	I1030 23:03:14.323273       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [f08781bf7e68bacc5bc7d81d0719dcd11eb80b4972a90a02426bf48a0d9b151f] <==
	* W1030 23:02:45.084272       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1030 23:02:45.084286       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1030 23:02:45.917616       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1030 23:02:45.917671       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1030 23:02:45.919936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1030 23:02:45.920039       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1030 23:02:45.969973       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1030 23:02:45.970061       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1030 23:02:46.073409       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1030 23:02:46.073500       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1030 23:02:46.076855       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1030 23:02:46.076930       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1030 23:02:46.111072       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1030 23:02:46.111176       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1030 23:02:46.133261       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1030 23:02:46.133395       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1030 23:02:46.141229       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1030 23:02:46.141335       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1030 23:02:46.166143       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1030 23:02:46.166165       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1030 23:02:46.186256       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1030 23:02:46.186400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1030 23:02:46.228849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1030 23:02:46.228923       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1030 23:02:48.856450       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-30 23:02:16 UTC, ends at Mon 2023-10-30 23:07:37 UTC. --
	Oct 30 23:07:25 addons-780757 kubelet[1243]: I1030 23:07:25.447921    1243 memory_manager.go:346] "RemoveStaleState removing state" podUID="5e89bfbd-f218-4afc-80e7-ca412afa1b3f" containerName="csi-attacher"
	Oct 30 23:07:25 addons-780757 kubelet[1243]: I1030 23:07:25.447926    1243 memory_manager.go:346] "RemoveStaleState removing state" podUID="d81809ea-7714-417d-b595-f4baba970354" containerName="csi-provisioner"
	Oct 30 23:07:25 addons-780757 kubelet[1243]: I1030 23:07:25.447932    1243 memory_manager.go:346] "RemoveStaleState removing state" podUID="d81809ea-7714-417d-b595-f4baba970354" containerName="csi-external-health-monitor-controller"
	Oct 30 23:07:25 addons-780757 kubelet[1243]: I1030 23:07:25.447938    1243 memory_manager.go:346] "RemoveStaleState removing state" podUID="d81809ea-7714-417d-b595-f4baba970354" containerName="liveness-probe"
	Oct 30 23:07:25 addons-780757 kubelet[1243]: I1030 23:07:25.508846    1243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlxfx\" (UniqueName: \"kubernetes.io/projected/b2bb812e-fecf-415f-ab23-9980029eb82a-kube-api-access-xlxfx\") pod \"hello-world-app-5d77478584-mdrvn\" (UID: \"b2bb812e-fecf-415f-ab23-9980029eb82a\") " pod="default/hello-world-app-5d77478584-mdrvn"
	Oct 30 23:07:25 addons-780757 kubelet[1243]: I1030 23:07:25.508937    1243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b2bb812e-fecf-415f-ab23-9980029eb82a-gcp-creds\") pod \"hello-world-app-5d77478584-mdrvn\" (UID: \"b2bb812e-fecf-415f-ab23-9980029eb82a\") " pod="default/hello-world-app-5d77478584-mdrvn"
	Oct 30 23:07:26 addons-780757 kubelet[1243]: I1030 23:07:26.818202    1243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnqsh\" (UniqueName: \"kubernetes.io/projected/7d8b030a-8d89-47c3-87b3-fa6b3676c8ce-kube-api-access-tnqsh\") pod \"7d8b030a-8d89-47c3-87b3-fa6b3676c8ce\" (UID: \"7d8b030a-8d89-47c3-87b3-fa6b3676c8ce\") "
	Oct 30 23:07:26 addons-780757 kubelet[1243]: I1030 23:07:26.820602    1243 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d8b030a-8d89-47c3-87b3-fa6b3676c8ce-kube-api-access-tnqsh" (OuterVolumeSpecName: "kube-api-access-tnqsh") pod "7d8b030a-8d89-47c3-87b3-fa6b3676c8ce" (UID: "7d8b030a-8d89-47c3-87b3-fa6b3676c8ce"). InnerVolumeSpecName "kube-api-access-tnqsh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 30 23:07:26 addons-780757 kubelet[1243]: I1030 23:07:26.919468    1243 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tnqsh\" (UniqueName: \"kubernetes.io/projected/7d8b030a-8d89-47c3-87b3-fa6b3676c8ce-kube-api-access-tnqsh\") on node \"addons-780757\" DevicePath \"\""
	Oct 30 23:07:27 addons-780757 kubelet[1243]: I1030 23:07:27.683496    1243 scope.go:117] "RemoveContainer" containerID="4ae3fec9773f8425a834dd72551d0be15f89920b069c8d5c395321594f06093a"
	Oct 30 23:07:27 addons-780757 kubelet[1243]: I1030 23:07:27.881057    1243 scope.go:117] "RemoveContainer" containerID="4ae3fec9773f8425a834dd72551d0be15f89920b069c8d5c395321594f06093a"
	Oct 30 23:07:27 addons-780757 kubelet[1243]: E1030 23:07:27.881700    1243 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ae3fec9773f8425a834dd72551d0be15f89920b069c8d5c395321594f06093a\": container with ID starting with 4ae3fec9773f8425a834dd72551d0be15f89920b069c8d5c395321594f06093a not found: ID does not exist" containerID="4ae3fec9773f8425a834dd72551d0be15f89920b069c8d5c395321594f06093a"
	Oct 30 23:07:27 addons-780757 kubelet[1243]: I1030 23:07:27.881756    1243 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ae3fec9773f8425a834dd72551d0be15f89920b069c8d5c395321594f06093a"} err="failed to get container status \"4ae3fec9773f8425a834dd72551d0be15f89920b069c8d5c395321594f06093a\": rpc error: code = NotFound desc = could not find container \"4ae3fec9773f8425a834dd72551d0be15f89920b069c8d5c395321594f06093a\": container with ID starting with 4ae3fec9773f8425a834dd72551d0be15f89920b069c8d5c395321594f06093a not found: ID does not exist"
	Oct 30 23:07:28 addons-780757 kubelet[1243]: I1030 23:07:28.452209    1243 kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-5dd5756b68-vnckz" secret="" err="secret \"gcp-auth\" not found"
	Oct 30 23:07:28 addons-780757 kubelet[1243]: I1030 23:07:28.455485    1243 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7d8b030a-8d89-47c3-87b3-fa6b3676c8ce" path="/var/lib/kubelet/pods/7d8b030a-8d89-47c3-87b3-fa6b3676c8ce/volumes"
	Oct 30 23:07:30 addons-780757 kubelet[1243]: I1030 23:07:30.459469    1243 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b0ff8da9-cff2-400c-ba7c-81cd96487427" path="/var/lib/kubelet/pods/b0ff8da9-cff2-400c-ba7c-81cd96487427/volumes"
	Oct 30 23:07:30 addons-780757 kubelet[1243]: I1030 23:07:30.460405    1243 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c66d6cf5-5613-4b31-979d-432e1e30c812" path="/var/lib/kubelet/pods/c66d6cf5-5613-4b31-979d-432e1e30c812/volumes"
	Oct 30 23:07:31 addons-780757 kubelet[1243]: I1030 23:07:31.866682    1243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1df1ffb0-fb14-4c71-9f73-24a74a0433de-webhook-cert\") pod \"1df1ffb0-fb14-4c71-9f73-24a74a0433de\" (UID: \"1df1ffb0-fb14-4c71-9f73-24a74a0433de\") "
	Oct 30 23:07:31 addons-780757 kubelet[1243]: I1030 23:07:31.866724    1243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z96dm\" (UniqueName: \"kubernetes.io/projected/1df1ffb0-fb14-4c71-9f73-24a74a0433de-kube-api-access-z96dm\") pod \"1df1ffb0-fb14-4c71-9f73-24a74a0433de\" (UID: \"1df1ffb0-fb14-4c71-9f73-24a74a0433de\") "
	Oct 30 23:07:31 addons-780757 kubelet[1243]: I1030 23:07:31.877724    1243 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1df1ffb0-fb14-4c71-9f73-24a74a0433de-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1df1ffb0-fb14-4c71-9f73-24a74a0433de" (UID: "1df1ffb0-fb14-4c71-9f73-24a74a0433de"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 30 23:07:31 addons-780757 kubelet[1243]: I1030 23:07:31.878068    1243 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1df1ffb0-fb14-4c71-9f73-24a74a0433de-kube-api-access-z96dm" (OuterVolumeSpecName: "kube-api-access-z96dm") pod "1df1ffb0-fb14-4c71-9f73-24a74a0433de" (UID: "1df1ffb0-fb14-4c71-9f73-24a74a0433de"). InnerVolumeSpecName "kube-api-access-z96dm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 30 23:07:31 addons-780757 kubelet[1243]: I1030 23:07:31.967878    1243 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-z96dm\" (UniqueName: \"kubernetes.io/projected/1df1ffb0-fb14-4c71-9f73-24a74a0433de-kube-api-access-z96dm\") on node \"addons-780757\" DevicePath \"\""
	Oct 30 23:07:31 addons-780757 kubelet[1243]: I1030 23:07:31.967902    1243 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1df1ffb0-fb14-4c71-9f73-24a74a0433de-webhook-cert\") on node \"addons-780757\" DevicePath \"\""
	Oct 30 23:07:32 addons-780757 kubelet[1243]: I1030 23:07:32.454300    1243 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1df1ffb0-fb14-4c71-9f73-24a74a0433de" path="/var/lib/kubelet/pods/1df1ffb0-fb14-4c71-9f73-24a74a0433de/volumes"
	Oct 30 23:07:32 addons-780757 kubelet[1243]: I1030 23:07:32.719927    1243 scope.go:117] "RemoveContainer" containerID="f681e0f094caefe039442e27f21ea75ba6100bd295942e2c4bc0d3c7108c5a92"
	
	* 
	* ==> storage-provisioner [4b19a1e9fcd5453bdcbcaccbfb19006f74b36dbf8c42e6004662b7d51b45baac] <==
	* I1030 23:03:20.468478       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1030 23:03:20.577736       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1030 23:03:20.577914       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1030 23:03:20.637415       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1030 23:03:20.641414       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-780757_7c5bd384-5f2a-431c-84a5-aebc105a5022!
	I1030 23:03:20.693231       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5a8ab72d-2d40-4c6a-ab6b-4db8baa60531", APIVersion:"v1", ResourceVersion:"864", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-780757_7c5bd384-5f2a-431c-84a5-aebc105a5022 became leader
	I1030 23:03:20.842856       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-780757_7c5bd384-5f2a-431c-84a5-aebc105a5022!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-780757 -n addons-780757
helpers_test.go:261: (dbg) Run:  kubectl --context addons-780757 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (158.63s)
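The post-mortem commands above can be replayed by hand when triaging this failure locally. A minimal sketch, assuming the addons-780757 profile and kubeconfig context still exist on the CI host; the wide pod listings are assumptions added for inspection, not part of the test:

    # Re-run the post-mortem status checks from helpers_test.go against the same profile.
    out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-780757 -n addons-780757
    kubectl --context addons-780757 get po -A --field-selector=status.phase!=Running
    # Assumed extra checks: look at the ingress controller and the nginx/hello-world-app pods directly.
    kubectl --context addons-780757 -n ingress-nginx get pods -o wide
    kubectl --context addons-780757 get pods,svc,ingress -o wide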

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (155.08s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-780757
addons_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-780757: exit status 82 (2m1.223126324s)

                                                
                                                
-- stdout --
	* Stopping node "addons-780757"  ...
	* Stopping node "addons-780757"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:173: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-780757" : exit status 82
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-780757
addons_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-780757: exit status 11 (21.570141427s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.172:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:177: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-780757" : exit status 11
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-780757
addons_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-780757: exit status 11 (6.144559302s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.172:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:181: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-780757" : exit status 11
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-780757
addons_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-780757: exit status 11 (6.142980059s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.172:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:186: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-780757" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.08s)
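A hedged reproduction sketch for the GUEST_STOP_TIMEOUT above, using only the profile name and the log-collection hint from the output; the virsh check and the verbosity flags are assumptions for a KVM host with libvirt client tools installed:

    # Retry the stop with verbose output, then confirm what state the guest is really in.
    out/minikube-linux-amd64 stop -p addons-780757 --alsologtostderr -v=3
    out/minikube-linux-amd64 status -p addons-780757
    sudo virsh list --all          # assumption: libvirt tools available on the CI host
    # Collect the logs the error message asks for.
    out/minikube-linux-amd64 logs -p addons-780757 --file=logs.txt

The follow-on addon enable/disable failures report "no route to host" on 192.168.39.172:22, which suggests the guest ended up neither cleanly stopped nor reachable over SSH; the same status/virsh checks would show its actual state.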

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (2.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-167609 image ls --format table --alsologtostderr: (2.529543637s)
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-167609 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-167609 image ls --format table --alsologtostderr:
I1030 23:14:53.195454  224214 out.go:296] Setting OutFile to fd 1 ...
I1030 23:14:53.195725  224214 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1030 23:14:53.195735  224214 out.go:309] Setting ErrFile to fd 2...
I1030 23:14:53.195740  224214 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1030 23:14:53.195928  224214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
I1030 23:14:53.196511  224214 config.go:182] Loaded profile config "functional-167609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1030 23:14:53.196613  224214 config.go:182] Loaded profile config "functional-167609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1030 23:14:53.196934  224214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1030 23:14:53.197021  224214 main.go:141] libmachine: Launching plugin server for driver kvm2
I1030 23:14:53.211466  224214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35271
I1030 23:14:53.211935  224214 main.go:141] libmachine: () Calling .GetVersion
I1030 23:14:53.212610  224214 main.go:141] libmachine: Using API Version  1
I1030 23:14:53.212644  224214 main.go:141] libmachine: () Calling .SetConfigRaw
I1030 23:14:53.213087  224214 main.go:141] libmachine: () Calling .GetMachineName
I1030 23:14:53.213375  224214 main.go:141] libmachine: (functional-167609) Calling .GetState
I1030 23:14:53.215302  224214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1030 23:14:53.215343  224214 main.go:141] libmachine: Launching plugin server for driver kvm2
I1030 23:14:53.230393  224214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
I1030 23:14:53.230954  224214 main.go:141] libmachine: () Calling .GetVersion
I1030 23:14:53.231469  224214 main.go:141] libmachine: Using API Version  1
I1030 23:14:53.231508  224214 main.go:141] libmachine: () Calling .SetConfigRaw
I1030 23:14:53.231844  224214 main.go:141] libmachine: () Calling .GetMachineName
I1030 23:14:53.232009  224214 main.go:141] libmachine: (functional-167609) Calling .DriverName
I1030 23:14:53.232236  224214 ssh_runner.go:195] Run: systemctl --version
I1030 23:14:53.232271  224214 main.go:141] libmachine: (functional-167609) Calling .GetSSHHostname
I1030 23:14:53.235146  224214 main.go:141] libmachine: (functional-167609) DBG | domain functional-167609 has defined MAC address 52:54:00:ab:26:d8 in network mk-functional-167609
I1030 23:14:53.235582  224214 main.go:141] libmachine: (functional-167609) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:26:d8", ip: ""} in network mk-functional-167609: {Iface:virbr1 ExpiryTime:2023-10-31 00:11:40 +0000 UTC Type:0 Mac:52:54:00:ab:26:d8 Iaid: IPaddr:192.168.50.211 Prefix:24 Hostname:functional-167609 Clientid:01:52:54:00:ab:26:d8}
I1030 23:14:53.235618  224214 main.go:141] libmachine: (functional-167609) DBG | domain functional-167609 has defined IP address 192.168.50.211 and MAC address 52:54:00:ab:26:d8 in network mk-functional-167609
I1030 23:14:53.235732  224214 main.go:141] libmachine: (functional-167609) Calling .GetSSHPort
I1030 23:14:53.235929  224214 main.go:141] libmachine: (functional-167609) Calling .GetSSHKeyPath
I1030 23:14:53.236072  224214 main.go:141] libmachine: (functional-167609) Calling .GetSSHUsername
I1030 23:14:53.236187  224214 sshutil.go:53] new ssh client: &{IP:192.168.50.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/functional-167609/id_rsa Username:docker}
I1030 23:14:53.377814  224214 ssh_runner.go:195] Run: sudo crictl images --output json
I1030 23:14:55.479362  224214 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.101512606s)
W1030 23:14:55.479441  224214 cache_images.go:715] Failed to list images for profile functional-167609 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1030 23:14:55.472183    8812 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,},}"
time="2023-10-30T23:14:55Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
I1030 23:14:55.479576  224214 main.go:141] libmachine: Making call to close driver server
I1030 23:14:55.479589  224214 main.go:141] libmachine: (functional-167609) Calling .Close
I1030 23:14:55.479877  224214 main.go:141] libmachine: Successfully made call to close driver server
I1030 23:14:55.479899  224214 main.go:141] libmachine: Making call to close connection to plugin binary
I1030 23:14:55.479910  224214 main.go:141] libmachine: Making call to close driver server
I1030 23:14:55.479919  224214 main.go:141] libmachine: (functional-167609) Calling .Close
I1030 23:14:55.480164  224214 main.go:141] libmachine: Successfully made call to close driver server
I1030 23:14:55.480180  224214 main.go:141] libmachine: Making call to close connection to plugin binary
I1030 23:14:55.480196  224214 main.go:141] libmachine: (functional-167609) DBG | Closing plugin on server side
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (2.53s)
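The empty table above reflects the in-guest "sudo crictl images --output json" call timing out (DeadlineExceeded) rather than a successful but empty listing. A minimal manual check, assuming SSH access to the functional-167609 node; the longer crictl timeout and the CRI-O service inspection are assumptions, not part of the test:

    # Re-run the listing on the node with a longer client timeout, then check the CRI-O service.
    out/minikube-linux-amd64 -p functional-167609 ssh "sudo crictl --timeout=60s images --output json"
    out/minikube-linux-amd64 -p functional-167609 ssh "sudo systemctl status crio --no-pager"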

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (2.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-167609 image ls --format json --alsologtostderr: (2.533698576s)
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-167609 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-167609 image ls --format json --alsologtostderr:
I1030 23:14:53.168499  224204 out.go:296] Setting OutFile to fd 1 ...
I1030 23:14:53.168673  224204 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1030 23:14:53.168687  224204 out.go:309] Setting ErrFile to fd 2...
I1030 23:14:53.168694  224204 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1030 23:14:53.169025  224204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
I1030 23:14:53.169815  224204 config.go:182] Loaded profile config "functional-167609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1030 23:14:53.169974  224204 config.go:182] Loaded profile config "functional-167609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1030 23:14:53.170503  224204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1030 23:14:53.170560  224204 main.go:141] libmachine: Launching plugin server for driver kvm2
I1030 23:14:53.187419  224204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42231
I1030 23:14:53.187910  224204 main.go:141] libmachine: () Calling .GetVersion
I1030 23:14:53.188546  224204 main.go:141] libmachine: Using API Version  1
I1030 23:14:53.188575  224204 main.go:141] libmachine: () Calling .SetConfigRaw
I1030 23:14:53.188920  224204 main.go:141] libmachine: () Calling .GetMachineName
I1030 23:14:53.189155  224204 main.go:141] libmachine: (functional-167609) Calling .GetState
I1030 23:14:53.190975  224204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1030 23:14:53.191017  224204 main.go:141] libmachine: Launching plugin server for driver kvm2
I1030 23:14:53.207157  224204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33329
I1030 23:14:53.207647  224204 main.go:141] libmachine: () Calling .GetVersion
I1030 23:14:53.208296  224204 main.go:141] libmachine: Using API Version  1
I1030 23:14:53.208359  224204 main.go:141] libmachine: () Calling .SetConfigRaw
I1030 23:14:53.208794  224204 main.go:141] libmachine: () Calling .GetMachineName
I1030 23:14:53.209061  224204 main.go:141] libmachine: (functional-167609) Calling .DriverName
I1030 23:14:53.209289  224204 ssh_runner.go:195] Run: systemctl --version
I1030 23:14:53.209323  224204 main.go:141] libmachine: (functional-167609) Calling .GetSSHHostname
I1030 23:14:53.212961  224204 main.go:141] libmachine: (functional-167609) DBG | domain functional-167609 has defined MAC address 52:54:00:ab:26:d8 in network mk-functional-167609
I1030 23:14:53.213438  224204 main.go:141] libmachine: (functional-167609) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:26:d8", ip: ""} in network mk-functional-167609: {Iface:virbr1 ExpiryTime:2023-10-31 00:11:40 +0000 UTC Type:0 Mac:52:54:00:ab:26:d8 Iaid: IPaddr:192.168.50.211 Prefix:24 Hostname:functional-167609 Clientid:01:52:54:00:ab:26:d8}
I1030 23:14:53.213489  224204 main.go:141] libmachine: (functional-167609) DBG | domain functional-167609 has defined IP address 192.168.50.211 and MAC address 52:54:00:ab:26:d8 in network mk-functional-167609
I1030 23:14:53.213652  224204 main.go:141] libmachine: (functional-167609) Calling .GetSSHPort
I1030 23:14:53.213801  224204 main.go:141] libmachine: (functional-167609) Calling .GetSSHKeyPath
I1030 23:14:53.213932  224204 main.go:141] libmachine: (functional-167609) Calling .GetSSHUsername
I1030 23:14:53.214067  224204 sshutil.go:53] new ssh client: &{IP:192.168.50.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/functional-167609/id_rsa Username:docker}
I1030 23:14:53.339924  224204 ssh_runner.go:195] Run: sudo crictl images --output json
I1030 23:14:55.457646  224204 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.117680119s)
W1030 23:14:55.457730  224204 cache_images.go:715] Failed to list images for profile functional-167609 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1030 23:14:55.449848    8802 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,},}"
time="2023-10-30T23:14:55Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
I1030 23:14:55.457874  224204 main.go:141] libmachine: Making call to close driver server
I1030 23:14:55.457888  224204 main.go:141] libmachine: (functional-167609) Calling .Close
I1030 23:14:55.458236  224204 main.go:141] libmachine: Successfully made call to close driver server
I1030 23:14:55.458258  224204 main.go:141] libmachine: Making call to close connection to plugin binary
I1030 23:14:55.458275  224204 main.go:141] libmachine: Making call to close driver server
I1030 23:14:55.458276  224204 main.go:141] libmachine: (functional-167609) DBG | Closing plugin on server side
I1030 23:14:55.458293  224204 main.go:141] libmachine: (functional-167609) Calling .Close
I1030 23:14:55.458613  224204 main.go:141] libmachine: (functional-167609) DBG | Closing plugin on server side
I1030 23:14:55.458711  224204 main.go:141] libmachine: Successfully made call to close driver server
I1030 23:14:55.458755  224204 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (2.53s)
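The json listing fails the same way: the underlying crictl call times out, so an empty list is printed and the registry.k8s.io/pause check cannot pass. If the sketch above does not reproduce the hang, recent CRI-O logs on the node may show what it was blocked on; this journalctl invocation is an assumption:

    # Assumed diagnostic: inspect recent CRI-O activity around the DeadlineExceeded errors.
    out/minikube-linux-amd64 -p functional-167609 ssh "sudo journalctl -u crio --since '10 minutes ago' --no-pager | tail -n 100"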

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (6.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167609 ssh pgrep buildkitd: exit status 1 (260.77717ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 image build -t localhost/my-image:functional-167609 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-167609 image build -t localhost/my-image:functional-167609 testdata/build --alsologtostderr: (3.615661937s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-167609 image build -t localhost/my-image:functional-167609 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8ddb651069f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-167609
--> b713ef851ba
Successfully tagged localhost/my-image:functional-167609
b713ef851bad66751ef8362794552632082696d287a7d75b8c82ff268bb2e341
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-167609 image build -t localhost/my-image:functional-167609 testdata/build --alsologtostderr:
I1030 23:14:47.094991  224133 out.go:296] Setting OutFile to fd 1 ...
I1030 23:14:47.095336  224133 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1030 23:14:47.095351  224133 out.go:309] Setting ErrFile to fd 2...
I1030 23:14:47.095359  224133 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1030 23:14:47.095674  224133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
I1030 23:14:47.096543  224133 config.go:182] Loaded profile config "functional-167609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1030 23:14:47.097219  224133 config.go:182] Loaded profile config "functional-167609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1030 23:14:47.097672  224133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1030 23:14:47.097749  224133 main.go:141] libmachine: Launching plugin server for driver kvm2
I1030 23:14:47.113207  224133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44999
I1030 23:14:47.113674  224133 main.go:141] libmachine: () Calling .GetVersion
I1030 23:14:47.114323  224133 main.go:141] libmachine: Using API Version  1
I1030 23:14:47.114354  224133 main.go:141] libmachine: () Calling .SetConfigRaw
I1030 23:14:47.114707  224133 main.go:141] libmachine: () Calling .GetMachineName
I1030 23:14:47.114901  224133 main.go:141] libmachine: (functional-167609) Calling .GetState
I1030 23:14:47.116824  224133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1030 23:14:47.116863  224133 main.go:141] libmachine: Launching plugin server for driver kvm2
I1030 23:14:47.131713  224133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34927
I1030 23:14:47.132084  224133 main.go:141] libmachine: () Calling .GetVersion
I1030 23:14:47.132646  224133 main.go:141] libmachine: Using API Version  1
I1030 23:14:47.132672  224133 main.go:141] libmachine: () Calling .SetConfigRaw
I1030 23:14:47.132992  224133 main.go:141] libmachine: () Calling .GetMachineName
I1030 23:14:47.133173  224133 main.go:141] libmachine: (functional-167609) Calling .DriverName
I1030 23:14:47.133392  224133 ssh_runner.go:195] Run: systemctl --version
I1030 23:14:47.133427  224133 main.go:141] libmachine: (functional-167609) Calling .GetSSHHostname
I1030 23:14:47.136304  224133 main.go:141] libmachine: (functional-167609) DBG | domain functional-167609 has defined MAC address 52:54:00:ab:26:d8 in network mk-functional-167609
I1030 23:14:47.136745  224133 main.go:141] libmachine: (functional-167609) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:26:d8", ip: ""} in network mk-functional-167609: {Iface:virbr1 ExpiryTime:2023-10-31 00:11:40 +0000 UTC Type:0 Mac:52:54:00:ab:26:d8 Iaid: IPaddr:192.168.50.211 Prefix:24 Hostname:functional-167609 Clientid:01:52:54:00:ab:26:d8}
I1030 23:14:47.136778  224133 main.go:141] libmachine: (functional-167609) DBG | domain functional-167609 has defined IP address 192.168.50.211 and MAC address 52:54:00:ab:26:d8 in network mk-functional-167609
I1030 23:14:47.136922  224133 main.go:141] libmachine: (functional-167609) Calling .GetSSHPort
I1030 23:14:47.137090  224133 main.go:141] libmachine: (functional-167609) Calling .GetSSHKeyPath
I1030 23:14:47.137266  224133 main.go:141] libmachine: (functional-167609) Calling .GetSSHUsername
I1030 23:14:47.137445  224133 sshutil.go:53] new ssh client: &{IP:192.168.50.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/functional-167609/id_rsa Username:docker}
I1030 23:14:47.279366  224133 build_images.go:151] Building image from path: /tmp/build.1424250961.tar
I1030 23:14:47.279473  224133 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1030 23:14:47.301682  224133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1424250961.tar
I1030 23:14:47.309271  224133 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1424250961.tar: stat -c "%s %y" /var/lib/minikube/build/build.1424250961.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1424250961.tar': No such file or directory
I1030 23:14:47.309320  224133 ssh_runner.go:362] scp /tmp/build.1424250961.tar --> /var/lib/minikube/build/build.1424250961.tar (3072 bytes)
I1030 23:14:47.394837  224133 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1424250961
I1030 23:14:47.424140  224133 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1424250961 -xf /var/lib/minikube/build/build.1424250961.tar
I1030 23:14:47.456527  224133 crio.go:297] Building image: /var/lib/minikube/build/build.1424250961
I1030 23:14:47.456653  224133 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-167609 /var/lib/minikube/build/build.1424250961 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1030 23:14:50.579687  224133 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-167609 /var/lib/minikube/build/build.1424250961 --cgroup-manager=cgroupfs: (3.122997019s)
I1030 23:14:50.579763  224133 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1424250961
I1030 23:14:50.605083  224133 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1424250961.tar
I1030 23:14:50.631341  224133 build_images.go:207] Built localhost/my-image:functional-167609 from /tmp/build.1424250961.tar
I1030 23:14:50.631376  224133 build_images.go:123] succeeded building to: functional-167609
I1030 23:14:50.631387  224133 build_images.go:124] failed building to: 
I1030 23:14:50.631423  224133 main.go:141] libmachine: Making call to close driver server
I1030 23:14:50.631558  224133 main.go:141] libmachine: (functional-167609) Calling .Close
I1030 23:14:50.631866  224133 main.go:141] libmachine: Successfully made call to close driver server
I1030 23:14:50.631883  224133 main.go:141] libmachine: Making call to close connection to plugin binary
I1030 23:14:50.631894  224133 main.go:141] libmachine: Making call to close driver server
I1030 23:14:50.631902  224133 main.go:141] libmachine: (functional-167609) Calling .Close
I1030 23:14:50.632159  224133 main.go:141] libmachine: Successfully made call to close driver server
I1030 23:14:50.632183  224133 main.go:141] libmachine: Making call to close connection to plugin binary
I1030 23:14:50.632204  224133 main.go:141] libmachine: (functional-167609) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 image ls
E1030 23:14:51.115214  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
2023/10/30 23:14:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-167609 image ls: (2.492671786s)
functional_test.go:442: expected "localhost/my-image:functional-167609" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (6.37s)
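The build itself succeeds (the podman build step above commits b713ef851ba and tags localhost/my-image:functional-167609); the assertion fails because the follow-up image ls does not show the tag, consistent with the crictl listing timeouts in the other image tests. A hedged check of whether the image actually landed in the node's storage; the podman and crictl invocations below are assumptions, not part of the test:

    # Confirm the built image exists on the node independently of 'minikube image ls'.
    out/minikube-linux-amd64 -p functional-167609 ssh "sudo podman images localhost/my-image:functional-167609"
    out/minikube-linux-amd64 -p functional-167609 ssh "sudo crictl images | grep my-image"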

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (167.44s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-371910 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1030 23:17:14.477080  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-371910 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.007200983s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-371910 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-371910 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [11820c82-f3c8-42ca-952a-6f62808c5557] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [11820c82-f3c8-42ca-952a-6f62808c5557] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.016984995s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-371910 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1030 23:19:14.583862  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1030 23:19:14.589227  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1030 23:19:14.599519  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1030 23:19:14.619781  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1030 23:19:14.660039  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1030 23:19:14.740355  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1030 23:19:14.900772  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1030 23:19:15.221434  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1030 23:19:15.862382  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1030 23:19:17.142875  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1030 23:19:19.703143  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1030 23:19:24.823914  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1030 23:19:30.630910  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
E1030 23:19:35.064233  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-371910 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.965286022s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
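The in-VM curl above exits with status 28 (curl's operation-timeout code), matching the two-minute-plus wall time. A manual re-run that keeps verbose output for inspection; the profile name and Host header come from the log, while the verbosity flag, the explicit timeout, and the extra pod listings are assumptions. The ingress-dns and cleanup steps continue below.

    # Re-run the ingress check by hand with verbose output and a bounded timeout.
    out/minikube-linux-amd64 -p ingress-addon-legacy-371910 ssh "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Assumed extra checks: confirm the controller and backend pods are serving.
    kubectl --context ingress-addon-legacy-371910 -n ingress-nginx get pods -o wide
    kubectl --context ingress-addon-legacy-371910 get ingress,svc,pods -o wide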
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-371910 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-371910 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.84
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-371910 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-371910 addons disable ingress-dns --alsologtostderr -v=1: (2.687417284s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-371910 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-371910 addons disable ingress --alsologtostderr -v=1: (7.785043948s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-371910 -n ingress-addon-legacy-371910
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-371910 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-371910 logs -n 25: (1.256333436s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|----------------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   |    Version     |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|----------------|---------------------|---------------------|
	| mount          | -p functional-167609                                                   | functional-167609           | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:14 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2632199463/001:/mount2 |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |                |                     |                     |
	| mount          | -p functional-167609                                                   | functional-167609           | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:14 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2632199463/001:/mount1 |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |                |                     |                     |
	| ssh            | functional-167609 ssh findmnt                                          | functional-167609           | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:14 UTC |                     |
	|                | -T /mount1                                                             |                             |         |                |                     |                     |
	| ssh            | functional-167609 ssh findmnt                                          | functional-167609           | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:14 UTC | 30 Oct 23 23:14 UTC |
	|                | -T /mount1                                                             |                             |         |                |                     |                     |
	| ssh            | functional-167609 ssh findmnt                                          | functional-167609           | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:14 UTC | 30 Oct 23 23:14 UTC |
	|                | -T /mount2                                                             |                             |         |                |                     |                     |
	| ssh            | functional-167609 ssh findmnt                                          | functional-167609           | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:14 UTC | 30 Oct 23 23:14 UTC |
	|                | -T /mount3                                                             |                             |         |                |                     |                     |
	| mount          | -p functional-167609                                                   | functional-167609           | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:14 UTC |                     |
	|                | --kill=true                                                            |                             |         |                |                     |                     |
	| image          | functional-167609                                                      | functional-167609           | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:14 UTC | 30 Oct 23 23:14 UTC |
	|                | image ls --format short                                                |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |                |                     |                     |
	| image          | functional-167609                                                      | functional-167609           | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:14 UTC | 30 Oct 23 23:14 UTC |
	|                | image ls --format yaml                                                 |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |                |                     |                     |
	| ssh            | functional-167609 ssh pgrep                                            | functional-167609           | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:14 UTC |                     |
	|                | buildkitd                                                              |                             |         |                |                     |                     |
	| image          | functional-167609 image build -t                                       | functional-167609           | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:14 UTC | 30 Oct 23 23:14 UTC |
	|                | localhost/my-image:functional-167609                                   |                             |         |                |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |                |                     |                     |
	| image          | functional-167609 image ls                                             | functional-167609           | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:14 UTC | 30 Oct 23 23:14 UTC |
	| image          | functional-167609                                                      | functional-167609           | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:14 UTC | 30 Oct 23 23:14 UTC |
	|                | image ls --format json                                                 |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |                |                     |                     |
	| image          | functional-167609                                                      | functional-167609           | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:14 UTC | 30 Oct 23 23:14 UTC |
	|                | image ls --format table                                                |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |                |                     |                     |
	| update-context | functional-167609                                                      | functional-167609           | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:14 UTC | 30 Oct 23 23:14 UTC |
	|                | update-context                                                         |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |                |                     |                     |
	| update-context | functional-167609                                                      | functional-167609           | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:14 UTC | 30 Oct 23 23:14 UTC |
	|                | update-context                                                         |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |                |                     |                     |
	| update-context | functional-167609                                                      | functional-167609           | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:14 UTC | 30 Oct 23 23:14 UTC |
	|                | update-context                                                         |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |                |                     |                     |
	| delete         | -p functional-167609                                                   | functional-167609           | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:15 UTC | 30 Oct 23 23:15 UTC |
	| start          | -p ingress-addon-legacy-371910                                         | ingress-addon-legacy-371910 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:15 UTC | 30 Oct 23 23:16 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |                |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |                |                     |                     |
	|                | -v=5 --driver=kvm2                                                     |                             |         |                |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |                |                     |                     |
	| addons         | ingress-addon-legacy-371910                                            | ingress-addon-legacy-371910 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:16 UTC | 30 Oct 23 23:17 UTC |
	|                | addons enable ingress                                                  |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |                |                     |                     |
	| addons         | ingress-addon-legacy-371910                                            | ingress-addon-legacy-371910 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:17 UTC | 30 Oct 23 23:17 UTC |
	|                | addons enable ingress-dns                                              |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |                |                     |                     |
	| ssh            | ingress-addon-legacy-371910                                            | ingress-addon-legacy-371910 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:17 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |                |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |                |                     |                     |
	| ip             | ingress-addon-legacy-371910 ip                                         | ingress-addon-legacy-371910 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:19 UTC | 30 Oct 23 23:19 UTC |
	| addons         | ingress-addon-legacy-371910                                            | ingress-addon-legacy-371910 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:19 UTC | 30 Oct 23 23:19 UTC |
	|                | addons disable ingress-dns                                             |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |                |                     |                     |
	| addons         | ingress-addon-legacy-371910                                            | ingress-addon-legacy-371910 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:19 UTC | 30 Oct 23 23:19 UTC |
	|                | addons disable ingress                                                 |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |                |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/30 23:15:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 23:15:10.870353  224508 out.go:296] Setting OutFile to fd 1 ...
	I1030 23:15:10.870633  224508 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:15:10.870645  224508 out.go:309] Setting ErrFile to fd 2...
	I1030 23:15:10.870652  224508 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:15:10.870887  224508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1030 23:15:10.871546  224508 out.go:303] Setting JSON to false
	I1030 23:15:10.872430  224508 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25063,"bootTime":1698682648,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 23:15:10.872506  224508 start.go:138] virtualization: kvm guest
	I1030 23:15:10.874874  224508 out.go:177] * [ingress-addon-legacy-371910] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 23:15:10.876513  224508 out.go:177]   - MINIKUBE_LOCATION=17527
	I1030 23:15:10.876523  224508 notify.go:220] Checking for updates...
	I1030 23:15:10.878131  224508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 23:15:10.879690  224508 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:15:10.881169  224508 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1030 23:15:10.882457  224508 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 23:15:10.883901  224508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 23:15:10.885549  224508 driver.go:378] Setting default libvirt URI to qemu:///system
	I1030 23:15:10.920008  224508 out.go:177] * Using the kvm2 driver based on user configuration
	I1030 23:15:10.921471  224508 start.go:298] selected driver: kvm2
	I1030 23:15:10.921488  224508 start.go:902] validating driver "kvm2" against <nil>
	I1030 23:15:10.921500  224508 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 23:15:10.922237  224508 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:15:10.922319  224508 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17527-208817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 23:15:10.936621  224508 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1030 23:15:10.936682  224508 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1030 23:15:10.936997  224508 start_flags.go:934] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 23:15:10.937078  224508 cni.go:84] Creating CNI manager for ""
	I1030 23:15:10.937094  224508 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 23:15:10.937113  224508 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 23:15:10.937128  224508 start_flags.go:323] config:
	{Name:ingress-addon-legacy-371910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-371910 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1030 23:15:10.937294  224508 iso.go:125] acquiring lock: {Name:mk17c26869b21ec4c3726ac5b4b2fb393d92c043 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:15:10.939088  224508 out.go:177] * Starting control plane node ingress-addon-legacy-371910 in cluster ingress-addon-legacy-371910
	I1030 23:15:10.940274  224508 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1030 23:15:10.960717  224508 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1030 23:15:10.960745  224508 cache.go:56] Caching tarball of preloaded images
	I1030 23:15:10.960891  224508 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1030 23:15:10.962564  224508 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1030 23:15:10.963790  224508 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1030 23:15:10.992633  224508 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1030 23:15:18.696885  224508 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1030 23:15:18.697023  224508 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1030 23:15:19.694198  224508 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1030 23:15:19.694572  224508 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/config.json ...
	I1030 23:15:19.694609  224508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/config.json: {Name:mk1d16c65232e1111b83f543d3ab49058542ac6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:15:19.694799  224508 start.go:365] acquiring machines lock for ingress-addon-legacy-371910: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 23:15:19.694831  224508 start.go:369] acquired machines lock for "ingress-addon-legacy-371910" in 17.887µs
	I1030 23:15:19.694850  224508 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-371910 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-371910 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 23:15:19.694941  224508 start.go:125] createHost starting for "" (driver="kvm2")
	I1030 23:15:19.697314  224508 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1030 23:15:19.697466  224508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:15:19.697520  224508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:15:19.712116  224508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I1030 23:15:19.712563  224508 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:15:19.713111  224508 main.go:141] libmachine: Using API Version  1
	I1030 23:15:19.713135  224508 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:15:19.713468  224508 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:15:19.713663  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetMachineName
	I1030 23:15:19.713838  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .DriverName
	I1030 23:15:19.714002  224508 start.go:159] libmachine.API.Create for "ingress-addon-legacy-371910" (driver="kvm2")
	I1030 23:15:19.714028  224508 client.go:168] LocalClient.Create starting
	I1030 23:15:19.714055  224508 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem
	I1030 23:15:19.714085  224508 main.go:141] libmachine: Decoding PEM data...
	I1030 23:15:19.714102  224508 main.go:141] libmachine: Parsing certificate...
	I1030 23:15:19.714167  224508 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem
	I1030 23:15:19.714186  224508 main.go:141] libmachine: Decoding PEM data...
	I1030 23:15:19.714197  224508 main.go:141] libmachine: Parsing certificate...
	I1030 23:15:19.714214  224508 main.go:141] libmachine: Running pre-create checks...
	I1030 23:15:19.714223  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .PreCreateCheck
	I1030 23:15:19.714590  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetConfigRaw
	I1030 23:15:19.715010  224508 main.go:141] libmachine: Creating machine...
	I1030 23:15:19.715027  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .Create
	I1030 23:15:19.715149  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Creating KVM machine...
	I1030 23:15:19.716451  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | found existing default KVM network
	I1030 23:15:19.717221  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | I1030 23:15:19.717071  224553 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001478d0}
	I1030 23:15:19.722355  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | trying to create private KVM network mk-ingress-addon-legacy-371910 192.168.39.0/24...
	I1030 23:15:19.790782  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | private KVM network mk-ingress-addon-legacy-371910 192.168.39.0/24 created
	I1030 23:15:19.790830  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Setting up store path in /home/jenkins/minikube-integration/17527-208817/.minikube/machines/ingress-addon-legacy-371910 ...
	I1030 23:15:19.790852  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | I1030 23:15:19.790750  224553 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17527-208817/.minikube
	I1030 23:15:19.790868  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Building disk image from file:///home/jenkins/minikube-integration/17527-208817/.minikube/cache/iso/amd64/minikube-v1.32.0-1698684775-17527-amd64.iso
	I1030 23:15:19.790892  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Downloading /home/jenkins/minikube-integration/17527-208817/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17527-208817/.minikube/cache/iso/amd64/minikube-v1.32.0-1698684775-17527-amd64.iso...
	I1030 23:15:20.020542  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | I1030 23:15:20.020417  224553 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/ingress-addon-legacy-371910/id_rsa...
	I1030 23:15:20.106155  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | I1030 23:15:20.105989  224553 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/ingress-addon-legacy-371910/ingress-addon-legacy-371910.rawdisk...
	I1030 23:15:20.106189  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | Writing magic tar header
	I1030 23:15:20.106202  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | Writing SSH key tar header
	I1030 23:15:20.106212  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | I1030 23:15:20.106115  224553 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17527-208817/.minikube/machines/ingress-addon-legacy-371910 ...
	I1030 23:15:20.106231  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/ingress-addon-legacy-371910
	I1030 23:15:20.106293  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube/machines/ingress-addon-legacy-371910 (perms=drwx------)
	I1030 23:15:20.106332  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube/machines
	I1030 23:15:20.106344  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube/machines (perms=drwxr-xr-x)
	I1030 23:15:20.106352  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube
	I1030 23:15:20.106364  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817
	I1030 23:15:20.106371  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 23:15:20.106381  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | Checking permissions on dir: /home/jenkins
	I1030 23:15:20.106389  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | Checking permissions on dir: /home
	I1030 23:15:20.106398  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | Skipping /home - not owner
	I1030 23:15:20.106474  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube (perms=drwxr-xr-x)
	I1030 23:15:20.106514  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817 (perms=drwxrwxr-x)
	I1030 23:15:20.106538  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 23:15:20.106557  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 23:15:20.106594  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Creating domain...
	I1030 23:15:20.107629  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) define libvirt domain using xml: 
	I1030 23:15:20.107640  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) <domain type='kvm'>
	I1030 23:15:20.107648  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)   <name>ingress-addon-legacy-371910</name>
	I1030 23:15:20.107653  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)   <memory unit='MiB'>4096</memory>
	I1030 23:15:20.107660  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)   <vcpu>2</vcpu>
	I1030 23:15:20.107665  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)   <features>
	I1030 23:15:20.107671  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     <acpi/>
	I1030 23:15:20.107676  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     <apic/>
	I1030 23:15:20.107682  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     <pae/>
	I1030 23:15:20.107687  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     
	I1030 23:15:20.107700  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)   </features>
	I1030 23:15:20.107719  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)   <cpu mode='host-passthrough'>
	I1030 23:15:20.107729  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)   
	I1030 23:15:20.107745  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)   </cpu>
	I1030 23:15:20.107759  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)   <os>
	I1030 23:15:20.107786  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     <type>hvm</type>
	I1030 23:15:20.107802  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     <boot dev='cdrom'/>
	I1030 23:15:20.107823  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     <boot dev='hd'/>
	I1030 23:15:20.107838  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     <bootmenu enable='no'/>
	I1030 23:15:20.107850  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)   </os>
	I1030 23:15:20.107863  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)   <devices>
	I1030 23:15:20.107875  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     <disk type='file' device='cdrom'>
	I1030 23:15:20.107902  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)       <source file='/home/jenkins/minikube-integration/17527-208817/.minikube/machines/ingress-addon-legacy-371910/boot2docker.iso'/>
	I1030 23:15:20.107938  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)       <target dev='hdc' bus='scsi'/>
	I1030 23:15:20.107952  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)       <readonly/>
	I1030 23:15:20.107964  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     </disk>
	I1030 23:15:20.107978  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     <disk type='file' device='disk'>
	I1030 23:15:20.107993  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 23:15:20.108023  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)       <source file='/home/jenkins/minikube-integration/17527-208817/.minikube/machines/ingress-addon-legacy-371910/ingress-addon-legacy-371910.rawdisk'/>
	I1030 23:15:20.108039  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)       <target dev='hda' bus='virtio'/>
	I1030 23:15:20.108048  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     </disk>
	I1030 23:15:20.108056  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     <interface type='network'>
	I1030 23:15:20.108066  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)       <source network='mk-ingress-addon-legacy-371910'/>
	I1030 23:15:20.108074  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)       <model type='virtio'/>
	I1030 23:15:20.108083  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     </interface>
	I1030 23:15:20.108091  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     <interface type='network'>
	I1030 23:15:20.108100  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)       <source network='default'/>
	I1030 23:15:20.108111  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)       <model type='virtio'/>
	I1030 23:15:20.108123  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     </interface>
	I1030 23:15:20.108131  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     <serial type='pty'>
	I1030 23:15:20.108137  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)       <target port='0'/>
	I1030 23:15:20.108145  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     </serial>
	I1030 23:15:20.108152  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     <console type='pty'>
	I1030 23:15:20.108160  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)       <target type='serial' port='0'/>
	I1030 23:15:20.108169  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     </console>
	I1030 23:15:20.108177  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     <rng model='virtio'>
	I1030 23:15:20.108206  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)       <backend model='random'>/dev/random</backend>
	I1030 23:15:20.108234  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     </rng>
	I1030 23:15:20.108252  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     
	I1030 23:15:20.108270  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)     
	I1030 23:15:20.108284  224508 main.go:141] libmachine: (ingress-addon-legacy-371910)   </devices>
	I1030 23:15:20.108296  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) </domain>
	I1030 23:15:20.108311  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) 
	I1030 23:15:20.112376  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:95:59:24 in network default
	I1030 23:15:20.112957  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Ensuring networks are active...
	I1030 23:15:20.112988  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:20.113617  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Ensuring network default is active
	I1030 23:15:20.113925  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Ensuring network mk-ingress-addon-legacy-371910 is active
	I1030 23:15:20.114379  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Getting domain xml...
	I1030 23:15:20.115062  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Creating domain...
	I1030 23:15:21.341911  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Waiting to get IP...
	I1030 23:15:21.342691  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:21.343065  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | unable to find current IP address of domain ingress-addon-legacy-371910 in network mk-ingress-addon-legacy-371910
	I1030 23:15:21.343099  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | I1030 23:15:21.343041  224553 retry.go:31] will retry after 261.650203ms: waiting for machine to come up
	I1030 23:15:21.606546  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:21.607031  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | unable to find current IP address of domain ingress-addon-legacy-371910 in network mk-ingress-addon-legacy-371910
	I1030 23:15:21.607066  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | I1030 23:15:21.606975  224553 retry.go:31] will retry after 363.784951ms: waiting for machine to come up
	I1030 23:15:21.972604  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:21.973184  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | unable to find current IP address of domain ingress-addon-legacy-371910 in network mk-ingress-addon-legacy-371910
	I1030 23:15:21.973217  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | I1030 23:15:21.973128  224553 retry.go:31] will retry after 346.098957ms: waiting for machine to come up
	I1030 23:15:22.320419  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:22.320763  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | unable to find current IP address of domain ingress-addon-legacy-371910 in network mk-ingress-addon-legacy-371910
	I1030 23:15:22.320802  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | I1030 23:15:22.320697  224553 retry.go:31] will retry after 419.058439ms: waiting for machine to come up
	I1030 23:15:22.741371  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:22.741897  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | unable to find current IP address of domain ingress-addon-legacy-371910 in network mk-ingress-addon-legacy-371910
	I1030 23:15:22.741923  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | I1030 23:15:22.741848  224553 retry.go:31] will retry after 517.041577ms: waiting for machine to come up
	I1030 23:15:23.260533  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:23.260901  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | unable to find current IP address of domain ingress-addon-legacy-371910 in network mk-ingress-addon-legacy-371910
	I1030 23:15:23.260945  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | I1030 23:15:23.260840  224553 retry.go:31] will retry after 811.808438ms: waiting for machine to come up
	I1030 23:15:24.073737  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:24.074095  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | unable to find current IP address of domain ingress-addon-legacy-371910 in network mk-ingress-addon-legacy-371910
	I1030 23:15:24.074119  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | I1030 23:15:24.074046  224553 retry.go:31] will retry after 1.09950033s: waiting for machine to come up
	I1030 23:15:25.174803  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:25.175254  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | unable to find current IP address of domain ingress-addon-legacy-371910 in network mk-ingress-addon-legacy-371910
	I1030 23:15:25.175291  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | I1030 23:15:25.175192  224553 retry.go:31] will retry after 1.211743467s: waiting for machine to come up
	I1030 23:15:26.388533  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:26.388880  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | unable to find current IP address of domain ingress-addon-legacy-371910 in network mk-ingress-addon-legacy-371910
	I1030 23:15:26.388915  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | I1030 23:15:26.388822  224553 retry.go:31] will retry after 1.643500768s: waiting for machine to come up
	I1030 23:15:28.033782  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:28.034239  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | unable to find current IP address of domain ingress-addon-legacy-371910 in network mk-ingress-addon-legacy-371910
	I1030 23:15:28.034266  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | I1030 23:15:28.034193  224553 retry.go:31] will retry after 1.419421086s: waiting for machine to come up
	I1030 23:15:29.454888  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:29.455301  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | unable to find current IP address of domain ingress-addon-legacy-371910 in network mk-ingress-addon-legacy-371910
	I1030 23:15:29.455336  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | I1030 23:15:29.455231  224553 retry.go:31] will retry after 1.760635469s: waiting for machine to come up
	I1030 23:15:31.217748  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:31.218114  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | unable to find current IP address of domain ingress-addon-legacy-371910 in network mk-ingress-addon-legacy-371910
	I1030 23:15:31.218146  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | I1030 23:15:31.218057  224553 retry.go:31] will retry after 2.989383763s: waiting for machine to come up
	I1030 23:15:34.208779  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:34.209115  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | unable to find current IP address of domain ingress-addon-legacy-371910 in network mk-ingress-addon-legacy-371910
	I1030 23:15:34.209145  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | I1030 23:15:34.209090  224553 retry.go:31] will retry after 3.004616397s: waiting for machine to come up
	I1030 23:15:37.215265  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:37.215688  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | unable to find current IP address of domain ingress-addon-legacy-371910 in network mk-ingress-addon-legacy-371910
	I1030 23:15:37.215726  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | I1030 23:15:37.215645  224553 retry.go:31] will retry after 5.579664704s: waiting for machine to come up
	I1030 23:15:42.798541  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:42.799015  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has current primary IP address 192.168.39.84 and MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:42.799053  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Found IP for machine: 192.168.39.84
	I1030 23:15:42.799078  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Reserving static IP address...
	I1030 23:15:42.799390  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-371910", mac: "52:54:00:df:59:00", ip: "192.168.39.84"} in network mk-ingress-addon-legacy-371910
	I1030 23:15:42.871929  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | Getting to WaitForSSH function...
	I1030 23:15:42.871983  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Reserved static IP address: 192.168.39.84
	I1030 23:15:42.871999  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Waiting for SSH to be available...
	I1030 23:15:42.874796  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:42.875176  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:59:00", ip: ""} in network mk-ingress-addon-legacy-371910: {Iface:virbr1 ExpiryTime:2023-10-31 00:15:35 +0000 UTC Type:0 Mac:52:54:00:df:59:00 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:minikube Clientid:01:52:54:00:df:59:00}
	I1030 23:15:42.875214  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined IP address 192.168.39.84 and MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:42.875325  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | Using SSH client type: external
	I1030 23:15:42.875356  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/ingress-addon-legacy-371910/id_rsa (-rw-------)
	I1030 23:15:42.875414  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/ingress-addon-legacy-371910/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 23:15:42.875441  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | About to run SSH command:
	I1030 23:15:42.875455  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | exit 0
	I1030 23:15:42.964551  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | SSH cmd err, output: <nil>: 
	I1030 23:15:42.964830  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) KVM machine creation complete!
	I1030 23:15:42.965224  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetConfigRaw
	I1030 23:15:42.965875  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .DriverName
	I1030 23:15:42.966076  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .DriverName
	I1030 23:15:42.966225  224508 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 23:15:42.966247  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetState
	I1030 23:15:42.967480  224508 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 23:15:42.967498  224508 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 23:15:42.967504  224508 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 23:15:42.967512  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHHostname
	I1030 23:15:42.969613  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:42.969977  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:59:00", ip: ""} in network mk-ingress-addon-legacy-371910: {Iface:virbr1 ExpiryTime:2023-10-31 00:15:35 +0000 UTC Type:0 Mac:52:54:00:df:59:00 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ingress-addon-legacy-371910 Clientid:01:52:54:00:df:59:00}
	I1030 23:15:42.970048  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined IP address 192.168.39.84 and MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:42.970106  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHPort
	I1030 23:15:42.970283  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHKeyPath
	I1030 23:15:42.970436  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHKeyPath
	I1030 23:15:42.970583  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHUsername
	I1030 23:15:42.970762  224508 main.go:141] libmachine: Using SSH client type: native
	I1030 23:15:42.971107  224508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1030 23:15:42.971119  224508 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 23:15:43.087703  224508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 23:15:43.087729  224508 main.go:141] libmachine: Detecting the provisioner...
	I1030 23:15:43.087738  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHHostname
	I1030 23:15:43.090686  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:43.091047  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:59:00", ip: ""} in network mk-ingress-addon-legacy-371910: {Iface:virbr1 ExpiryTime:2023-10-31 00:15:35 +0000 UTC Type:0 Mac:52:54:00:df:59:00 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ingress-addon-legacy-371910 Clientid:01:52:54:00:df:59:00}
	I1030 23:15:43.091083  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined IP address 192.168.39.84 and MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:43.091223  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHPort
	I1030 23:15:43.091401  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHKeyPath
	I1030 23:15:43.091542  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHKeyPath
	I1030 23:15:43.091664  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHUsername
	I1030 23:15:43.091806  224508 main.go:141] libmachine: Using SSH client type: native
	I1030 23:15:43.092123  224508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1030 23:15:43.092134  224508 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 23:15:43.209409  224508 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gea8740b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1030 23:15:43.209471  224508 main.go:141] libmachine: found compatible host: buildroot
	I1030 23:15:43.209485  224508 main.go:141] libmachine: Provisioning with buildroot...
	I1030 23:15:43.209503  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetMachineName
	I1030 23:15:43.209782  224508 buildroot.go:166] provisioning hostname "ingress-addon-legacy-371910"
	I1030 23:15:43.209806  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetMachineName
	I1030 23:15:43.210033  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHHostname
	I1030 23:15:43.213649  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:43.214070  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:59:00", ip: ""} in network mk-ingress-addon-legacy-371910: {Iface:virbr1 ExpiryTime:2023-10-31 00:15:35 +0000 UTC Type:0 Mac:52:54:00:df:59:00 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ingress-addon-legacy-371910 Clientid:01:52:54:00:df:59:00}
	I1030 23:15:43.214108  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined IP address 192.168.39.84 and MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:43.214202  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHPort
	I1030 23:15:43.214409  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHKeyPath
	I1030 23:15:43.214602  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHKeyPath
	I1030 23:15:43.214725  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHUsername
	I1030 23:15:43.214930  224508 main.go:141] libmachine: Using SSH client type: native
	I1030 23:15:43.215258  224508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1030 23:15:43.215274  224508 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-371910 && echo "ingress-addon-legacy-371910" | sudo tee /etc/hostname
	I1030 23:15:43.345076  224508 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-371910
	
	I1030 23:15:43.345112  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHHostname
	I1030 23:15:43.347963  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:43.348291  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:59:00", ip: ""} in network mk-ingress-addon-legacy-371910: {Iface:virbr1 ExpiryTime:2023-10-31 00:15:35 +0000 UTC Type:0 Mac:52:54:00:df:59:00 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ingress-addon-legacy-371910 Clientid:01:52:54:00:df:59:00}
	I1030 23:15:43.348330  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined IP address 192.168.39.84 and MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:43.348546  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHPort
	I1030 23:15:43.348763  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHKeyPath
	I1030 23:15:43.348971  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHKeyPath
	I1030 23:15:43.349126  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHUsername
	I1030 23:15:43.349331  224508 main.go:141] libmachine: Using SSH client type: native
	I1030 23:15:43.349673  224508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1030 23:15:43.349700  224508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-371910' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-371910/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-371910' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 23:15:43.478454  224508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 23:15:43.478489  224508 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1030 23:15:43.478529  224508 buildroot.go:174] setting up certificates
	I1030 23:15:43.478542  224508 provision.go:83] configureAuth start
	I1030 23:15:43.478563  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetMachineName
	I1030 23:15:43.478876  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetIP
	I1030 23:15:43.481680  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:43.482040  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:59:00", ip: ""} in network mk-ingress-addon-legacy-371910: {Iface:virbr1 ExpiryTime:2023-10-31 00:15:35 +0000 UTC Type:0 Mac:52:54:00:df:59:00 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ingress-addon-legacy-371910 Clientid:01:52:54:00:df:59:00}
	I1030 23:15:43.482067  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined IP address 192.168.39.84 and MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:43.482256  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHHostname
	I1030 23:15:43.484504  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:43.484825  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:59:00", ip: ""} in network mk-ingress-addon-legacy-371910: {Iface:virbr1 ExpiryTime:2023-10-31 00:15:35 +0000 UTC Type:0 Mac:52:54:00:df:59:00 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ingress-addon-legacy-371910 Clientid:01:52:54:00:df:59:00}
	I1030 23:15:43.484889  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined IP address 192.168.39.84 and MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:43.484980  224508 provision.go:138] copyHostCerts
	I1030 23:15:43.485010  224508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1030 23:15:43.485039  224508 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1030 23:15:43.485057  224508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1030 23:15:43.485117  224508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1030 23:15:43.485206  224508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1030 23:15:43.485231  224508 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1030 23:15:43.485241  224508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1030 23:15:43.485267  224508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1030 23:15:43.485309  224508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1030 23:15:43.485324  224508 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1030 23:15:43.485330  224508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1030 23:15:43.485350  224508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1030 23:15:43.485402  224508 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-371910 san=[192.168.39.84 192.168.39.84 localhost 127.0.0.1 minikube ingress-addon-legacy-371910]
	I1030 23:15:43.670760  224508 provision.go:172] copyRemoteCerts
	I1030 23:15:43.670835  224508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 23:15:43.670868  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHHostname
	I1030 23:15:43.673888  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:43.674246  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:59:00", ip: ""} in network mk-ingress-addon-legacy-371910: {Iface:virbr1 ExpiryTime:2023-10-31 00:15:35 +0000 UTC Type:0 Mac:52:54:00:df:59:00 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ingress-addon-legacy-371910 Clientid:01:52:54:00:df:59:00}
	I1030 23:15:43.674283  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined IP address 192.168.39.84 and MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:43.674428  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHPort
	I1030 23:15:43.674681  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHKeyPath
	I1030 23:15:43.674868  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHUsername
	I1030 23:15:43.675015  224508 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/ingress-addon-legacy-371910/id_rsa Username:docker}
	I1030 23:15:43.763424  224508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 23:15:43.763520  224508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1030 23:15:43.792276  224508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 23:15:43.792346  224508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 23:15:43.819695  224508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 23:15:43.819762  224508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1030 23:15:43.845015  224508 provision.go:86] duration metric: configureAuth took 366.457289ms
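The provisioning step above generates a server certificate whose SANs cover the VM IP (192.168.39.84), localhost, and the machine name, then copies it together with the CA to /etc/docker on the guest. As a minimal Go sketch (illustrative only, not minikube's own code), the SANs baked into such a PEM can be inspected with crypto/x509; the file path is an assumption taken from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the log above; adjust for your own .minikube directory.
	data, err := os.ReadFile("/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data) // first PEM block should be the certificate
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}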
	I1030 23:15:43.845040  224508 buildroot.go:189] setting minikube options for container-runtime
	I1030 23:15:43.845264  224508 config.go:182] Loaded profile config "ingress-addon-legacy-371910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1030 23:15:43.845355  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHHostname
	I1030 23:15:43.848005  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:43.848346  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:59:00", ip: ""} in network mk-ingress-addon-legacy-371910: {Iface:virbr1 ExpiryTime:2023-10-31 00:15:35 +0000 UTC Type:0 Mac:52:54:00:df:59:00 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ingress-addon-legacy-371910 Clientid:01:52:54:00:df:59:00}
	I1030 23:15:43.848381  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined IP address 192.168.39.84 and MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:43.848528  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHPort
	I1030 23:15:43.848698  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHKeyPath
	I1030 23:15:43.848905  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHKeyPath
	I1030 23:15:43.849071  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHUsername
	I1030 23:15:43.849234  224508 main.go:141] libmachine: Using SSH client type: native
	I1030 23:15:43.849584  224508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1030 23:15:43.849609  224508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 23:15:44.431894  224508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 23:15:44.431926  224508 main.go:141] libmachine: Checking connection to Docker...
	I1030 23:15:44.431937  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetURL
	I1030 23:15:44.433217  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | Using libvirt version 6000000
	I1030 23:15:44.435469  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:44.435906  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:59:00", ip: ""} in network mk-ingress-addon-legacy-371910: {Iface:virbr1 ExpiryTime:2023-10-31 00:15:35 +0000 UTC Type:0 Mac:52:54:00:df:59:00 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ingress-addon-legacy-371910 Clientid:01:52:54:00:df:59:00}
	I1030 23:15:44.435933  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined IP address 192.168.39.84 and MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:44.436101  224508 main.go:141] libmachine: Docker is up and running!
	I1030 23:15:44.436115  224508 main.go:141] libmachine: Reticulating splines...
	I1030 23:15:44.436122  224508 client.go:171] LocalClient.Create took 24.72208476s
	I1030 23:15:44.436144  224508 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-371910" took 24.722142295s
	I1030 23:15:44.436155  224508 start.go:300] post-start starting for "ingress-addon-legacy-371910" (driver="kvm2")
	I1030 23:15:44.436165  224508 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 23:15:44.436187  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .DriverName
	I1030 23:15:44.436425  224508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 23:15:44.436450  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHHostname
	I1030 23:15:44.438554  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:44.438907  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:59:00", ip: ""} in network mk-ingress-addon-legacy-371910: {Iface:virbr1 ExpiryTime:2023-10-31 00:15:35 +0000 UTC Type:0 Mac:52:54:00:df:59:00 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ingress-addon-legacy-371910 Clientid:01:52:54:00:df:59:00}
	I1030 23:15:44.438931  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined IP address 192.168.39.84 and MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:44.439099  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHPort
	I1030 23:15:44.439311  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHKeyPath
	I1030 23:15:44.439468  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHUsername
	I1030 23:15:44.439640  224508 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/ingress-addon-legacy-371910/id_rsa Username:docker}
	I1030 23:15:44.525483  224508 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 23:15:44.529986  224508 info.go:137] Remote host: Buildroot 2021.02.12
	I1030 23:15:44.530014  224508 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1030 23:15:44.530094  224508 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1030 23:15:44.530184  224508 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1030 23:15:44.530196  224508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> /etc/ssl/certs/2160052.pem
	I1030 23:15:44.530308  224508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 23:15:44.538812  224508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1030 23:15:44.563101  224508 start.go:303] post-start completed in 126.927117ms
	I1030 23:15:44.563157  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetConfigRaw
	I1030 23:15:44.591292  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetIP
	I1030 23:15:44.593937  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:44.594309  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:59:00", ip: ""} in network mk-ingress-addon-legacy-371910: {Iface:virbr1 ExpiryTime:2023-10-31 00:15:35 +0000 UTC Type:0 Mac:52:54:00:df:59:00 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ingress-addon-legacy-371910 Clientid:01:52:54:00:df:59:00}
	I1030 23:15:44.594345  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined IP address 192.168.39.84 and MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:44.594646  224508 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/config.json ...
	I1030 23:15:44.594860  224508 start.go:128] duration metric: createHost completed in 24.899906265s
	I1030 23:15:44.594889  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHHostname
	I1030 23:15:44.597119  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:44.597417  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:59:00", ip: ""} in network mk-ingress-addon-legacy-371910: {Iface:virbr1 ExpiryTime:2023-10-31 00:15:35 +0000 UTC Type:0 Mac:52:54:00:df:59:00 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ingress-addon-legacy-371910 Clientid:01:52:54:00:df:59:00}
	I1030 23:15:44.597452  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined IP address 192.168.39.84 and MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:44.597587  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHPort
	I1030 23:15:44.597782  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHKeyPath
	I1030 23:15:44.597973  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHKeyPath
	I1030 23:15:44.598098  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHUsername
	I1030 23:15:44.598266  224508 main.go:141] libmachine: Using SSH client type: native
	I1030 23:15:44.598723  224508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1030 23:15:44.598742  224508 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1030 23:15:44.717778  224508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698707744.688010190
	
	I1030 23:15:44.717800  224508 fix.go:206] guest clock: 1698707744.688010190
	I1030 23:15:44.717807  224508 fix.go:219] Guest: 2023-10-30 23:15:44.68801019 +0000 UTC Remote: 2023-10-30 23:15:44.594874229 +0000 UTC m=+33.776102013 (delta=93.135961ms)
	I1030 23:15:44.717856  224508 fix.go:190] guest clock delta is within tolerance: 93.135961ms
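The clock check above runs date +%s.%N on the guest (the %!s(MISSING) fragments are just Go's fmt rendering of the literal % verbs in the logged command) and compares the parsed timestamp against the host clock. A minimal sketch of that comparison, assuming a 2-second tolerance purely for illustration (minikube's real threshold may differ):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "seconds.nanoseconds" output, as shown in the log, into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1698707744.688010190") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (within assumed 2s tolerance: %v)\n", delta, delta < 2*time.Second)
}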
	I1030 23:15:44.717864  224508 start.go:83] releasing machines lock for "ingress-addon-legacy-371910", held for 25.023022588s
	I1030 23:15:44.717890  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .DriverName
	I1030 23:15:44.718193  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetIP
	I1030 23:15:44.720845  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:44.721178  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:59:00", ip: ""} in network mk-ingress-addon-legacy-371910: {Iface:virbr1 ExpiryTime:2023-10-31 00:15:35 +0000 UTC Type:0 Mac:52:54:00:df:59:00 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ingress-addon-legacy-371910 Clientid:01:52:54:00:df:59:00}
	I1030 23:15:44.721212  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined IP address 192.168.39.84 and MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:44.721337  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .DriverName
	I1030 23:15:44.721810  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .DriverName
	I1030 23:15:44.722070  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .DriverName
	I1030 23:15:44.722142  224508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 23:15:44.722193  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHHostname
	I1030 23:15:44.722312  224508 ssh_runner.go:195] Run: cat /version.json
	I1030 23:15:44.722334  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHHostname
	I1030 23:15:44.724916  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:44.725097  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:44.725338  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:59:00", ip: ""} in network mk-ingress-addon-legacy-371910: {Iface:virbr1 ExpiryTime:2023-10-31 00:15:35 +0000 UTC Type:0 Mac:52:54:00:df:59:00 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ingress-addon-legacy-371910 Clientid:01:52:54:00:df:59:00}
	I1030 23:15:44.725368  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined IP address 192.168.39.84 and MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:44.725483  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHPort
	I1030 23:15:44.725594  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:59:00", ip: ""} in network mk-ingress-addon-legacy-371910: {Iface:virbr1 ExpiryTime:2023-10-31 00:15:35 +0000 UTC Type:0 Mac:52:54:00:df:59:00 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ingress-addon-legacy-371910 Clientid:01:52:54:00:df:59:00}
	I1030 23:15:44.725620  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined IP address 192.168.39.84 and MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:44.725641  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHKeyPath
	I1030 23:15:44.725809  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHPort
	I1030 23:15:44.725816  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHUsername
	I1030 23:15:44.725992  224508 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/ingress-addon-legacy-371910/id_rsa Username:docker}
	I1030 23:15:44.726028  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHKeyPath
	I1030 23:15:44.726173  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHUsername
	I1030 23:15:44.726321  224508 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/ingress-addon-legacy-371910/id_rsa Username:docker}
	I1030 23:15:44.809789  224508 ssh_runner.go:195] Run: systemctl --version
	I1030 23:15:44.847654  224508 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 23:15:45.002206  224508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 23:15:45.008131  224508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 23:15:45.008203  224508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 23:15:45.022128  224508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 23:15:45.022154  224508 start.go:472] detecting cgroup driver to use...
	I1030 23:15:45.022215  224508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 23:15:45.035051  224508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 23:15:45.046597  224508 docker.go:198] disabling cri-docker service (if available) ...
	I1030 23:15:45.046641  224508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 23:15:45.057922  224508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 23:15:45.069471  224508 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 23:15:45.166464  224508 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 23:15:45.276655  224508 docker.go:214] disabling docker service ...
	I1030 23:15:45.276726  224508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 23:15:45.290388  224508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 23:15:45.302144  224508 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 23:15:45.399728  224508 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 23:15:45.497941  224508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 23:15:45.510248  224508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 23:15:45.527223  224508 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1030 23:15:45.527295  224508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:15:45.536135  224508 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 23:15:45.536196  224508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:15:45.545705  224508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:15:45.555128  224508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:15:45.564370  224508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
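The sed invocations above pin the CRI-O pause image, switch the cgroup manager to cgroupfs, and set conmon_cgroup = "pod" in /etc/crio/crio.conf.d/02-crio.conf. A simplified Go sketch of equivalent line rewrites on a local copy of such a drop-in; it folds the separate conmon_cgroup delete-and-insert steps into a single replacement and is not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "02-crio.conf" // hypothetical local copy of the drop-in edited above
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	// Equivalent of the cgroup_manager rewrite plus the conmon_cgroup insert, collapsed into one step.
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("rewrote", path)
}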
	I1030 23:15:45.574234  224508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 23:15:45.582226  224508 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 23:15:45.582283  224508 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 23:15:45.594244  224508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 23:15:45.602301  224508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 23:15:45.704130  224508 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 23:15:45.860331  224508 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 23:15:45.860423  224508 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 23:15:45.867368  224508 start.go:540] Will wait 60s for crictl version
	I1030 23:15:45.867434  224508 ssh_runner.go:195] Run: which crictl
	I1030 23:15:45.871918  224508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 23:15:45.909699  224508 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1030 23:15:45.909795  224508 ssh_runner.go:195] Run: crio --version
	I1030 23:15:45.958780  224508 ssh_runner.go:195] Run: crio --version
	I1030 23:15:46.003032  224508 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I1030 23:15:46.004419  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetIP
	I1030 23:15:46.006967  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:46.007299  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:59:00", ip: ""} in network mk-ingress-addon-legacy-371910: {Iface:virbr1 ExpiryTime:2023-10-31 00:15:35 +0000 UTC Type:0 Mac:52:54:00:df:59:00 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ingress-addon-legacy-371910 Clientid:01:52:54:00:df:59:00}
	I1030 23:15:46.007327  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined IP address 192.168.39.84 and MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:15:46.007559  224508 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 23:15:46.011435  224508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 23:15:46.022878  224508 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1030 23:15:46.022949  224508 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 23:15:46.053986  224508 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1030 23:15:46.054113  224508 ssh_runner.go:195] Run: which lz4
	I1030 23:15:46.058020  224508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1030 23:15:46.058114  224508 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1030 23:15:46.062222  224508 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 23:15:46.062263  224508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1030 23:15:47.917021  224508 crio.go:444] Took 1.858929 seconds to copy over tarball
	I1030 23:15:47.917111  224508 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 23:15:51.033024  224508 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.115876948s)
	I1030 23:15:51.033058  224508 crio.go:451] Took 3.116008 seconds to extract the tarball
	I1030 23:15:51.033068  224508 ssh_runner.go:146] rm: /preloaded.tar.lz4
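The preload path above first stats /preloaded.tar.lz4 on the guest, copies the tarball over when the stat fails, extracts it with tar and lz4 into /var, and finally removes it. A minimal local sketch of the check-then-extract step, assuming tar and lz4 are on PATH (illustrative only; the real flow runs these commands over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4" // name from the log
	const dest = "/var"                                                             // extraction root used above

	// Mirror the existence check the log performs with stat before copying.
	if _, err := os.Stat(tarball); err != nil {
		panic(fmt.Errorf("preload tarball missing: %w", err))
	}
	// Same command shape the log runs remotely: tar -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	fmt.Println("extracted", tarball, "into", dest)
}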
	I1030 23:15:51.077559  224508 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 23:15:51.130985  224508 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1030 23:15:51.131017  224508 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1030 23:15:51.131103  224508 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 23:15:51.131128  224508 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1030 23:15:51.131159  224508 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1030 23:15:51.131184  224508 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1030 23:15:51.131142  224508 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1030 23:15:51.131216  224508 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1030 23:15:51.131231  224508 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1030 23:15:51.131231  224508 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1030 23:15:51.135355  224508 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1030 23:15:51.135384  224508 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1030 23:15:51.135411  224508 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1030 23:15:51.135355  224508 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1030 23:15:51.135358  224508 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1030 23:15:51.135432  224508 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 23:15:51.135464  224508 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1030 23:15:51.136253  224508 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1030 23:15:51.305838  224508 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1030 23:15:51.306571  224508 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1030 23:15:51.318446  224508 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1030 23:15:51.324357  224508 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1030 23:15:51.338197  224508 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1030 23:15:51.354271  224508 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1030 23:15:51.374875  224508 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1030 23:15:51.401403  224508 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1030 23:15:51.401457  224508 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1030 23:15:51.401501  224508 ssh_runner.go:195] Run: which crictl
	I1030 23:15:51.401509  224508 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1030 23:15:51.401552  224508 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1030 23:15:51.401600  224508 ssh_runner.go:195] Run: which crictl
	I1030 23:15:51.442243  224508 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1030 23:15:51.442288  224508 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1030 23:15:51.442340  224508 ssh_runner.go:195] Run: which crictl
	I1030 23:15:51.442346  224508 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1030 23:15:51.442378  224508 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1030 23:15:51.442430  224508 ssh_runner.go:195] Run: which crictl
	I1030 23:15:51.470800  224508 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 23:15:51.480223  224508 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1030 23:15:51.480280  224508 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1030 23:15:51.480334  224508 ssh_runner.go:195] Run: which crictl
	I1030 23:15:51.481763  224508 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1030 23:15:51.481797  224508 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1030 23:15:51.481828  224508 ssh_runner.go:195] Run: which crictl
	I1030 23:15:51.496005  224508 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1030 23:15:51.496049  224508 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1030 23:15:51.496088  224508 ssh_runner.go:195] Run: which crictl
	I1030 23:15:51.496091  224508 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1030 23:15:51.496168  224508 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1030 23:15:51.496216  224508 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1030 23:15:51.496230  224508 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1030 23:15:51.701954  224508 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1030 23:15:51.701995  224508 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1030 23:15:51.702013  224508 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1030 23:15:51.702105  224508 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1030 23:15:51.702134  224508 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1030 23:15:51.702249  224508 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1030 23:15:51.702318  224508 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1030 23:15:51.767771  224508 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1030 23:15:51.767857  224508 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1030 23:15:51.767970  224508 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1030 23:15:51.768010  224508 cache_images.go:92] LoadImages completed in 636.978164ms
	W1030 23:15:51.768100  224508 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
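The cache loader above marks each image as needing transfer when it is absent from the runtime, then looks for a matching file under .minikube/cache/images; the X warning is emitted when that cached file does not exist locally. A small sketch of the existence check, with the cache directory and file names taken from the log as assumptions:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Base directory as it appears in the log above.
	base := "/home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io"
	for _, img := range []string{"kube-proxy_v1.18.20", "coredns_1.6.7", "etcd_3.4.3-0", "pause_3.2"} {
		p := filepath.Join(base, img)
		if _, err := os.Stat(p); err != nil {
			fmt.Println("missing cached image:", p) // this case produces the warning above
		} else {
			fmt.Println("found cached image:", p)
		}
	}
}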
	I1030 23:15:51.768151  224508 ssh_runner.go:195] Run: crio config
	I1030 23:15:51.824081  224508 cni.go:84] Creating CNI manager for ""
	I1030 23:15:51.824114  224508 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 23:15:51.824144  224508 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1030 23:15:51.824169  224508 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.84 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-371910 NodeName:ingress-addon-legacy-371910 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1030 23:15:51.824504  224508 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-371910"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.84"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 23:15:51.824643  224508 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-371910 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-371910 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
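The kubelet drop-in above is assembled from the node's settings: Kubernetes version, CRI socket, hostname override, and node IP. A minimal text/template sketch that renders a similar unit; the struct and field names are assumptions for illustration, not minikube's types:

package main

import (
	"os"
	"text/template"
)

// nodeOpts holds just the values visible in the ExecStart line above (hypothetical type).
type nodeOpts struct {
	KubernetesVersion string
	Hostname          string
	NodeIP            string
	CRISocket         string
}

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix://{{.CRISocket}} --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from the log above.
	err := t.Execute(os.Stdout, nodeOpts{
		KubernetesVersion: "v1.18.20",
		Hostname:          "ingress-addon-legacy-371910",
		NodeIP:            "192.168.39.84",
		CRISocket:         "/var/run/crio/crio.sock",
	})
	if err != nil {
		panic(err)
	}
}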
	I1030 23:15:51.824728  224508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1030 23:15:51.833753  224508 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 23:15:51.833844  224508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 23:15:51.843891  224508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I1030 23:15:51.860347  224508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1030 23:15:51.876522  224508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1030 23:15:51.893124  224508 ssh_runner.go:195] Run: grep 192.168.39.84	control-plane.minikube.internal$ /etc/hosts
	I1030 23:15:51.896883  224508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 23:15:51.909999  224508 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910 for IP: 192.168.39.84
	I1030 23:15:51.910032  224508 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:15:51.910183  224508 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1030 23:15:51.910227  224508 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1030 23:15:51.910271  224508 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.key
	I1030 23:15:51.910287  224508 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt with IP's: []
	I1030 23:15:51.955319  224508 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt ...
	I1030 23:15:51.955350  224508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: {Name:mk82973a8173ceea47bab5a8cc7f3569b1fe39e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:15:51.955519  224508 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.key ...
	I1030 23:15:51.955530  224508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.key: {Name:mka74435a06ffcd9a37e8bd6df1cae4651d83ffa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:15:51.955605  224508 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/apiserver.key.2e1821a6
	I1030 23:15:51.955620  224508 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/apiserver.crt.2e1821a6 with IP's: [192.168.39.84 10.96.0.1 127.0.0.1 10.0.0.1]
	I1030 23:15:52.023412  224508 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/apiserver.crt.2e1821a6 ...
	I1030 23:15:52.023444  224508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/apiserver.crt.2e1821a6: {Name:mk0abea165d4ccabe954739e43b5f50e52873d69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:15:52.023605  224508 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/apiserver.key.2e1821a6 ...
	I1030 23:15:52.023616  224508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/apiserver.key.2e1821a6: {Name:mka82c8b9056965cc5362519ca78d28916959807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:15:52.023699  224508 certs.go:337] copying /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/apiserver.crt.2e1821a6 -> /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/apiserver.crt
	I1030 23:15:52.023764  224508 certs.go:341] copying /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/apiserver.key.2e1821a6 -> /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/apiserver.key
	I1030 23:15:52.023813  224508 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/proxy-client.key
	I1030 23:15:52.023826  224508 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/proxy-client.crt with IP's: []
	I1030 23:15:52.123117  224508 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/proxy-client.crt ...
	I1030 23:15:52.123151  224508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/proxy-client.crt: {Name:mk0e47da4933438820a955ef52f27353fd4b7589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:15:52.123304  224508 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/proxy-client.key ...
	I1030 23:15:52.123315  224508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/proxy-client.key: {Name:mk2fd6688682f398bc88efc6c0252efbf6f68975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:15:52.123385  224508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1030 23:15:52.123402  224508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1030 23:15:52.123412  224508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1030 23:15:52.123425  224508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1030 23:15:52.123434  224508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 23:15:52.123447  224508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 23:15:52.123460  224508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 23:15:52.123472  224508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 23:15:52.123533  224508 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1030 23:15:52.123609  224508 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1030 23:15:52.123625  224508 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 23:15:52.123676  224508 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1030 23:15:52.123702  224508 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1030 23:15:52.123731  224508 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1030 23:15:52.123772  224508 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1030 23:15:52.123804  224508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> /usr/share/ca-certificates/2160052.pem
	I1030 23:15:52.123818  224508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:15:52.123830  224508 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem -> /usr/share/ca-certificates/216005.pem
	I1030 23:15:52.124487  224508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1030 23:15:52.149491  224508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1030 23:15:52.172514  224508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 23:15:52.195261  224508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1030 23:15:52.217647  224508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 23:15:52.239889  224508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 23:15:52.263287  224508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 23:15:52.285280  224508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1030 23:15:52.307063  224508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1030 23:15:52.329972  224508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 23:15:52.353340  224508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1030 23:15:52.375720  224508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1030 23:15:52.391190  224508 ssh_runner.go:195] Run: openssl version
	I1030 23:15:52.396452  224508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 23:15:52.405644  224508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:15:52.410107  224508 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:15:52.410166  224508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:15:52.415416  224508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 23:15:52.424673  224508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1030 23:15:52.433843  224508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1030 23:15:52.438162  224508 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1030 23:15:52.438205  224508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1030 23:15:52.443411  224508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1030 23:15:52.452318  224508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1030 23:15:52.461382  224508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1030 23:15:52.466023  224508 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1030 23:15:52.466067  224508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1030 23:15:52.472612  224508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
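Each CA dropped under /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (the b5213941.0, 51391683.0 and 3ec20f2e.0 names above). A small sketch of that hash-and-symlink step, shelling out to openssl just as the logged commands do; it assumes openssl on PATH and write access to /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA reproduces the openssl x509 -hash + ln -fs pattern from the log.
func linkCA(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f semantics: replace any stale link first
	return os.Symlink(pemPath, link)
}

func main() {
	// Paths taken from the log; this normally requires root.
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		panic(err)
	}
	fmt.Println("linked minikubeCA.pem")
}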
	I1030 23:15:52.481680  224508 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1030 23:15:52.485321  224508 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1030 23:15:52.485375  224508 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-371910 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-371910 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1030 23:15:52.485463  224508 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 23:15:52.485561  224508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 23:15:52.522038  224508 cri.go:89] found id: ""
	I1030 23:15:52.522115  224508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 23:15:52.530910  224508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 23:15:52.540452  224508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 23:15:52.548577  224508 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 23:15:52.548626  224508 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1030 23:15:52.598286  224508 kubeadm.go:322] W1030 23:15:52.578784     964 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1030 23:15:52.731028  224508 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 23:15:55.275625  224508 kubeadm.go:322] W1030 23:15:55.258941     964 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1030 23:15:55.276765  224508 kubeadm.go:322] W1030 23:15:55.260050     964 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1030 23:16:05.335207  224508 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1030 23:16:05.335265  224508 kubeadm.go:322] [preflight] Running pre-flight checks
	I1030 23:16:05.335343  224508 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 23:16:05.335444  224508 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 23:16:05.335545  224508 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 23:16:05.335680  224508 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 23:16:05.335823  224508 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 23:16:05.335890  224508 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1030 23:16:05.335985  224508 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 23:16:05.337351  224508 out.go:204]   - Generating certificates and keys ...
	I1030 23:16:05.337461  224508 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1030 23:16:05.337561  224508 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1030 23:16:05.337659  224508 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1030 23:16:05.337734  224508 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1030 23:16:05.337820  224508 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1030 23:16:05.337898  224508 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1030 23:16:05.337975  224508 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1030 23:16:05.338152  224508 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-371910 localhost] and IPs [192.168.39.84 127.0.0.1 ::1]
	I1030 23:16:05.338226  224508 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1030 23:16:05.338385  224508 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-371910 localhost] and IPs [192.168.39.84 127.0.0.1 ::1]
	I1030 23:16:05.338474  224508 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1030 23:16:05.338539  224508 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1030 23:16:05.338600  224508 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1030 23:16:05.338650  224508 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 23:16:05.338692  224508 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 23:16:05.338739  224508 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 23:16:05.338800  224508 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 23:16:05.338846  224508 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 23:16:05.338906  224508 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 23:16:05.341129  224508 out.go:204]   - Booting up control plane ...
	I1030 23:16:05.341229  224508 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 23:16:05.341303  224508 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 23:16:05.341374  224508 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 23:16:05.341457  224508 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 23:16:05.341611  224508 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 23:16:05.341677  224508 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502789 seconds
	I1030 23:16:05.341768  224508 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1030 23:16:05.341882  224508 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1030 23:16:05.341954  224508 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1030 23:16:05.342070  224508 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-371910 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1030 23:16:05.342117  224508 kubeadm.go:322] [bootstrap-token] Using token: 4toabd.7wwixmss81rilda5
	I1030 23:16:05.343510  224508 out.go:204]   - Configuring RBAC rules ...
	I1030 23:16:05.343599  224508 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1030 23:16:05.343692  224508 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1030 23:16:05.343842  224508 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1030 23:16:05.343982  224508 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1030 23:16:05.344096  224508 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1030 23:16:05.344173  224508 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1030 23:16:05.344270  224508 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1030 23:16:05.344340  224508 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1030 23:16:05.344421  224508 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1030 23:16:05.344432  224508 kubeadm.go:322] 
	I1030 23:16:05.344510  224508 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1030 23:16:05.344525  224508 kubeadm.go:322] 
	I1030 23:16:05.344623  224508 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1030 23:16:05.344633  224508 kubeadm.go:322] 
	I1030 23:16:05.344669  224508 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1030 23:16:05.344756  224508 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1030 23:16:05.344831  224508 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1030 23:16:05.344842  224508 kubeadm.go:322] 
	I1030 23:16:05.344924  224508 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1030 23:16:05.345040  224508 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1030 23:16:05.345097  224508 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1030 23:16:05.345103  224508 kubeadm.go:322] 
	I1030 23:16:05.345177  224508 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1030 23:16:05.345247  224508 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1030 23:16:05.345256  224508 kubeadm.go:322] 
	I1030 23:16:05.345326  224508 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4toabd.7wwixmss81rilda5 \
	I1030 23:16:05.345413  224508 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 \
	I1030 23:16:05.345443  224508 kubeadm.go:322]     --control-plane 
	I1030 23:16:05.345449  224508 kubeadm.go:322] 
	I1030 23:16:05.345517  224508 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1030 23:16:05.345525  224508 kubeadm.go:322] 
	I1030 23:16:05.345596  224508 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4toabd.7wwixmss81rilda5 \
	I1030 23:16:05.345701  224508 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1030 23:16:05.345713  224508 cni.go:84] Creating CNI manager for ""
	I1030 23:16:05.345723  224508 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 23:16:05.347225  224508 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1030 23:16:05.348373  224508 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1030 23:16:05.363316  224508 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1030 23:16:05.381920  224508 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 23:16:05.382035  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:05.382044  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4 minikube.k8s.io/name=ingress-addon-legacy-371910 minikube.k8s.io/updated_at=2023_10_30T23_16_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:05.406691  224508 ops.go:34] apiserver oom_adj: -16
	I1030 23:16:05.642513  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:05.806939  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:06.406701  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:06.906615  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:07.406768  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:07.906326  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:08.407040  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:08.906863  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:09.406744  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:09.907045  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:10.406884  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:10.906780  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:11.406099  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:11.906890  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:12.407053  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:12.906850  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:13.406200  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:13.906735  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:14.406259  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:14.906302  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:15.406628  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:15.906741  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:16.406911  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:16.906417  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:17.406906  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:17.907042  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:18.406758  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:18.906824  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:19.406704  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:19.906377  224508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:16:20.013956  224508 kubeadm.go:1081] duration metric: took 14.631999367s to wait for elevateKubeSystemPrivileges.
	I1030 23:16:20.013999  224508 kubeadm.go:406] StartCluster complete in 27.528629189s
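
	The repeated "kubectl get sa default" runs above are a fixed-interval poll: minikube keeps asking until kubeadm has finished creating the default service account, then records the elapsed time (the elevateKubeSystemPrivileges metric). A minimal sketch of an equivalent wait using client-go follows; the kubeconfig path, 30-second deadline, and 500ms interval are illustrative assumptions, not minikube's actual implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a clientset from a kubeconfig path (illustrative path, assumed to exist).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 500ms until the "default" ServiceAccount exists or the deadline passes.
		deadline := time.Now().Add(30 * time.Second)
		for time.Now().Before(deadline) {
			if _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{}); err == nil {
				fmt.Println("default service account is present")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the default service account")
	}
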
	I1030 23:16:20.014028  224508 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:16:20.014123  224508 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:16:20.014918  224508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:16:20.015147  224508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1030 23:16:20.015309  224508 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1030 23:16:20.015404  224508 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-371910"
	I1030 23:16:20.015407  224508 config.go:182] Loaded profile config "ingress-addon-legacy-371910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1030 23:16:20.015423  224508 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-371910"
	I1030 23:16:20.015456  224508 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-371910"
	I1030 23:16:20.015428  224508 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-371910"
	I1030 23:16:20.015607  224508 host.go:66] Checking if "ingress-addon-legacy-371910" exists ...
	I1030 23:16:20.015783  224508 kapi.go:59] client config for ingress-addon-legacy-371910: &rest.Config{Host:"https://192.168.39.84:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt", KeyFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.key", CAFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]u
int8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 23:16:20.016059  224508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:16:20.016098  224508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:16:20.016111  224508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:16:20.016142  224508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:16:20.016636  224508 cert_rotation.go:137] Starting client certificate rotation controller
	I1030 23:16:20.036558  224508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35345
	I1030 23:16:20.036559  224508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45795
	I1030 23:16:20.037060  224508 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:16:20.037139  224508 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:16:20.037617  224508 main.go:141] libmachine: Using API Version  1
	I1030 23:16:20.037617  224508 main.go:141] libmachine: Using API Version  1
	I1030 23:16:20.037645  224508 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:16:20.037655  224508 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:16:20.038026  224508 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:16:20.038074  224508 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:16:20.038250  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetState
	I1030 23:16:20.038540  224508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:16:20.038572  224508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:16:20.041096  224508 kapi.go:59] client config for ingress-addon-legacy-371910: &rest.Config{Host:"https://192.168.39.84:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt", KeyFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.key", CAFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]u
int8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 23:16:20.041493  224508 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-371910"
	I1030 23:16:20.041544  224508 host.go:66] Checking if "ingress-addon-legacy-371910" exists ...
	I1030 23:16:20.042007  224508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:16:20.042049  224508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:16:20.050003  224508 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-371910" context rescaled to 1 replicas
	I1030 23:16:20.050042  224508 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 23:16:20.052148  224508 out.go:177] * Verifying Kubernetes components...
	I1030 23:16:20.053643  224508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 23:16:20.054493  224508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37667
	I1030 23:16:20.054918  224508 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:16:20.055500  224508 main.go:141] libmachine: Using API Version  1
	I1030 23:16:20.055530  224508 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:16:20.055908  224508 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:16:20.056131  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetState
	I1030 23:16:20.056984  224508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37655
	I1030 23:16:20.057488  224508 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:16:20.057982  224508 main.go:141] libmachine: Using API Version  1
	I1030 23:16:20.058005  224508 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:16:20.058022  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .DriverName
	I1030 23:16:20.060127  224508 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 23:16:20.058335  224508 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:16:20.061685  224508 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 23:16:20.061702  224508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 23:16:20.060759  224508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:16:20.061722  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHHostname
	I1030 23:16:20.061759  224508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:16:20.065148  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:16:20.065658  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:59:00", ip: ""} in network mk-ingress-addon-legacy-371910: {Iface:virbr1 ExpiryTime:2023-10-31 00:15:35 +0000 UTC Type:0 Mac:52:54:00:df:59:00 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ingress-addon-legacy-371910 Clientid:01:52:54:00:df:59:00}
	I1030 23:16:20.065693  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined IP address 192.168.39.84 and MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:16:20.065879  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHPort
	I1030 23:16:20.066095  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHKeyPath
	I1030 23:16:20.066284  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHUsername
	I1030 23:16:20.066470  224508 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/ingress-addon-legacy-371910/id_rsa Username:docker}
	I1030 23:16:20.077872  224508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I1030 23:16:20.078511  224508 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:16:20.079161  224508 main.go:141] libmachine: Using API Version  1
	I1030 23:16:20.079196  224508 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:16:20.079569  224508 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:16:20.079757  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetState
	I1030 23:16:20.081502  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .DriverName
	I1030 23:16:20.081762  224508 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 23:16:20.081781  224508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 23:16:20.081800  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHHostname
	I1030 23:16:20.084639  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:16:20.085071  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:59:00", ip: ""} in network mk-ingress-addon-legacy-371910: {Iface:virbr1 ExpiryTime:2023-10-31 00:15:35 +0000 UTC Type:0 Mac:52:54:00:df:59:00 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ingress-addon-legacy-371910 Clientid:01:52:54:00:df:59:00}
	I1030 23:16:20.085114  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | domain ingress-addon-legacy-371910 has defined IP address 192.168.39.84 and MAC address 52:54:00:df:59:00 in network mk-ingress-addon-legacy-371910
	I1030 23:16:20.085238  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHPort
	I1030 23:16:20.085411  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHKeyPath
	I1030 23:16:20.085575  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .GetSSHUsername
	I1030 23:16:20.085736  224508 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/ingress-addon-legacy-371910/id_rsa Username:docker}
	I1030 23:16:20.211027  224508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 23:16:20.217603  224508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1030 23:16:20.218190  224508 kapi.go:59] client config for ingress-addon-legacy-371910: &rest.Config{Host:"https://192.168.39.84:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt", KeyFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.key", CAFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]u
int8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 23:16:20.218566  224508 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-371910" to be "Ready" ...
	I1030 23:16:20.282695  224508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 23:16:20.331981  224508 node_ready.go:49] node "ingress-addon-legacy-371910" has status "Ready":"True"
	I1030 23:16:20.332005  224508 node_ready.go:38] duration metric: took 113.417529ms waiting for node "ingress-addon-legacy-371910" to be "Ready" ...
	I1030 23:16:20.332016  224508 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 23:16:20.408733  224508 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-8btxz" in "kube-system" namespace to be "Ready" ...
	I1030 23:16:21.045790  224508 main.go:141] libmachine: Making call to close driver server
	I1030 23:16:21.045825  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .Close
	I1030 23:16:21.045865  224508 main.go:141] libmachine: Making call to close driver server
	I1030 23:16:21.045790  224508 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1030 23:16:21.045887  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .Close
	I1030 23:16:21.046170  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | Closing plugin on server side
	I1030 23:16:21.046176  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | Closing plugin on server side
	I1030 23:16:21.046221  224508 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:16:21.046226  224508 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:16:21.046235  224508 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:16:21.046241  224508 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:16:21.046252  224508 main.go:141] libmachine: Making call to close driver server
	I1030 23:16:21.046268  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .Close
	I1030 23:16:21.046254  224508 main.go:141] libmachine: Making call to close driver server
	I1030 23:16:21.046299  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .Close
	I1030 23:16:21.046531  224508 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:16:21.046570  224508 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:16:21.046583  224508 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:16:21.046614  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | Closing plugin on server side
	I1030 23:16:21.046594  224508 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:16:21.046538  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | Closing plugin on server side
	I1030 23:16:21.062167  224508 main.go:141] libmachine: Making call to close driver server
	I1030 23:16:21.062184  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) Calling .Close
	I1030 23:16:21.062462  224508 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:16:21.062481  224508 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:16:21.062496  224508 main.go:141] libmachine: (ingress-addon-legacy-371910) DBG | Closing plugin on server side
	I1030 23:16:21.064376  224508 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1030 23:16:21.065742  224508 addons.go:502] enable addons completed in 1.050452545s: enabled=[storage-provisioner default-storageclass]
	I1030 23:16:22.811242  224508 pod_ready.go:102] pod "coredns-66bff467f8-8btxz" in "kube-system" namespace has status "Ready":"False"
	I1030 23:16:24.811934  224508 pod_ready.go:102] pod "coredns-66bff467f8-8btxz" in "kube-system" namespace has status "Ready":"False"
	I1030 23:16:26.813009  224508 pod_ready.go:102] pod "coredns-66bff467f8-8btxz" in "kube-system" namespace has status "Ready":"False"
	I1030 23:16:29.310682  224508 pod_ready.go:102] pod "coredns-66bff467f8-8btxz" in "kube-system" namespace has status "Ready":"False"
	I1030 23:16:31.312187  224508 pod_ready.go:102] pod "coredns-66bff467f8-8btxz" in "kube-system" namespace has status "Ready":"False"
	I1030 23:16:33.811969  224508 pod_ready.go:102] pod "coredns-66bff467f8-8btxz" in "kube-system" namespace has status "Ready":"False"
	I1030 23:16:36.311569  224508 pod_ready.go:102] pod "coredns-66bff467f8-8btxz" in "kube-system" namespace has status "Ready":"False"
	I1030 23:16:38.312156  224508 pod_ready.go:102] pod "coredns-66bff467f8-8btxz" in "kube-system" namespace has status "Ready":"False"
	I1030 23:16:40.811040  224508 pod_ready.go:102] pod "coredns-66bff467f8-8btxz" in "kube-system" namespace has status "Ready":"False"
	I1030 23:16:42.811914  224508 pod_ready.go:102] pod "coredns-66bff467f8-8btxz" in "kube-system" namespace has status "Ready":"False"
	I1030 23:16:45.311524  224508 pod_ready.go:102] pod "coredns-66bff467f8-8btxz" in "kube-system" namespace has status "Ready":"False"
	I1030 23:16:47.312253  224508 pod_ready.go:102] pod "coredns-66bff467f8-8btxz" in "kube-system" namespace has status "Ready":"False"
	I1030 23:16:49.312649  224508 pod_ready.go:102] pod "coredns-66bff467f8-8btxz" in "kube-system" namespace has status "Ready":"False"
	I1030 23:16:51.813314  224508 pod_ready.go:102] pod "coredns-66bff467f8-8btxz" in "kube-system" namespace has status "Ready":"False"
	I1030 23:16:53.312459  224508 pod_ready.go:92] pod "coredns-66bff467f8-8btxz" in "kube-system" namespace has status "Ready":"True"
	I1030 23:16:53.312481  224508 pod_ready.go:81] duration metric: took 32.903708017s waiting for pod "coredns-66bff467f8-8btxz" in "kube-system" namespace to be "Ready" ...
	I1030 23:16:53.312489  224508 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-371910" in "kube-system" namespace to be "Ready" ...
	I1030 23:16:53.316547  224508 pod_ready.go:92] pod "etcd-ingress-addon-legacy-371910" in "kube-system" namespace has status "Ready":"True"
	I1030 23:16:53.316575  224508 pod_ready.go:81] duration metric: took 4.072728ms waiting for pod "etcd-ingress-addon-legacy-371910" in "kube-system" namespace to be "Ready" ...
	I1030 23:16:53.316591  224508 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-371910" in "kube-system" namespace to be "Ready" ...
	I1030 23:16:53.321920  224508 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-371910" in "kube-system" namespace has status "Ready":"True"
	I1030 23:16:53.321936  224508 pod_ready.go:81] duration metric: took 5.337099ms waiting for pod "kube-apiserver-ingress-addon-legacy-371910" in "kube-system" namespace to be "Ready" ...
	I1030 23:16:53.321944  224508 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-371910" in "kube-system" namespace to be "Ready" ...
	I1030 23:16:53.326440  224508 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-371910" in "kube-system" namespace has status "Ready":"True"
	I1030 23:16:53.326458  224508 pod_ready.go:81] duration metric: took 4.508256ms waiting for pod "kube-controller-manager-ingress-addon-legacy-371910" in "kube-system" namespace to be "Ready" ...
	I1030 23:16:53.326477  224508 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vwtpq" in "kube-system" namespace to be "Ready" ...
	I1030 23:16:53.331746  224508 pod_ready.go:92] pod "kube-proxy-vwtpq" in "kube-system" namespace has status "Ready":"True"
	I1030 23:16:53.331762  224508 pod_ready.go:81] duration metric: took 5.278547ms waiting for pod "kube-proxy-vwtpq" in "kube-system" namespace to be "Ready" ...
	I1030 23:16:53.331769  224508 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-371910" in "kube-system" namespace to be "Ready" ...
	I1030 23:16:53.506114  224508 request.go:629] Waited for 174.267212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.84:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-371910
	I1030 23:16:53.706838  224508 request.go:629] Waited for 198.330792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.84:8443/api/v1/nodes/ingress-addon-legacy-371910
	I1030 23:16:53.711373  224508 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-371910" in "kube-system" namespace has status "Ready":"True"
	I1030 23:16:53.711393  224508 pod_ready.go:81] duration metric: took 379.618616ms waiting for pod "kube-scheduler-ingress-addon-legacy-371910" in "kube-system" namespace to be "Ready" ...
	I1030 23:16:53.711407  224508 pod_ready.go:38] duration metric: took 33.379377086s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
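
	The pod_ready waits above check each system-critical pod until its Ready condition reports True. A small helper of that shape, written against client-go and reusing a clientset built as in the earlier sketch, could look like the following; the package and function names are hypothetical.

	package readiness

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// isPodReady reports whether the named pod's PodReady condition is True,
	// which is the signal the pod_ready log lines above are waiting for.
	func isPodReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
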
	I1030 23:16:53.711426  224508 api_server.go:52] waiting for apiserver process to appear ...
	I1030 23:16:53.711475  224508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 23:16:53.726303  224508 api_server.go:72] duration metric: took 33.676230547s to wait for apiserver process to appear ...
	I1030 23:16:53.726321  224508 api_server.go:88] waiting for apiserver healthz status ...
	I1030 23:16:53.726337  224508 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I1030 23:16:53.733267  224508 api_server.go:279] https://192.168.39.84:8443/healthz returned 200:
	ok
	I1030 23:16:53.734327  224508 api_server.go:141] control plane version: v1.18.20
	I1030 23:16:53.734349  224508 api_server.go:131] duration metric: took 8.021986ms to wait for apiserver health ...
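
	The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint, treated as healthy once it returns 200 with the body "ok". A minimal Go sketch of such a probe, assuming the endpoint shown in the log; TLS verification is skipped only to keep the example short, whereas a real probe would trust the cluster CA instead.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Probe the apiserver health endpoint seen in the log above.
		// InsecureSkipVerify keeps the sketch short; a real check would load the cluster CA.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.84:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}
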
	I1030 23:16:53.734356  224508 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 23:16:53.906808  224508 request.go:629] Waited for 172.350019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.84:8443/api/v1/namespaces/kube-system/pods
	I1030 23:16:53.912116  224508 system_pods.go:59] 7 kube-system pods found
	I1030 23:16:53.912149  224508 system_pods.go:61] "coredns-66bff467f8-8btxz" [d2f54fda-2da9-4595-83cf-626d79f41e88] Running
	I1030 23:16:53.912154  224508 system_pods.go:61] "etcd-ingress-addon-legacy-371910" [0ab09da8-dfb5-48e3-96ff-f9bc6ca1cdf9] Running
	I1030 23:16:53.912159  224508 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-371910" [3a04a327-dfde-408a-ba5d-1796fc0188c6] Running
	I1030 23:16:53.912163  224508 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-371910" [a88e14aa-4cef-4977-84e2-d04f768b1889] Running
	I1030 23:16:53.912167  224508 system_pods.go:61] "kube-proxy-vwtpq" [46c3c09b-5ce4-49e7-95d4-0fbbfaa3af8b] Running
	I1030 23:16:53.912171  224508 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-371910" [302f7157-84a0-4aee-933a-a7ce7e41b434] Running
	I1030 23:16:53.912175  224508 system_pods.go:61] "storage-provisioner" [c22db9fe-302a-448e-a0bb-d16f660deb43] Running
	I1030 23:16:53.912183  224508 system_pods.go:74] duration metric: took 177.821043ms to wait for pod list to return data ...
	I1030 23:16:53.912191  224508 default_sa.go:34] waiting for default service account to be created ...
	I1030 23:16:54.106707  224508 request.go:629] Waited for 194.400874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.84:8443/api/v1/namespaces/default/serviceaccounts
	I1030 23:16:54.109288  224508 default_sa.go:45] found service account: "default"
	I1030 23:16:54.109328  224508 default_sa.go:55] duration metric: took 197.121934ms for default service account to be created ...
	I1030 23:16:54.109341  224508 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 23:16:54.306803  224508 request.go:629] Waited for 197.383774ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.84:8443/api/v1/namespaces/kube-system/pods
	I1030 23:16:54.313893  224508 system_pods.go:86] 7 kube-system pods found
	I1030 23:16:54.313920  224508 system_pods.go:89] "coredns-66bff467f8-8btxz" [d2f54fda-2da9-4595-83cf-626d79f41e88] Running
	I1030 23:16:54.313926  224508 system_pods.go:89] "etcd-ingress-addon-legacy-371910" [0ab09da8-dfb5-48e3-96ff-f9bc6ca1cdf9] Running
	I1030 23:16:54.313930  224508 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-371910" [3a04a327-dfde-408a-ba5d-1796fc0188c6] Running
	I1030 23:16:54.313934  224508 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-371910" [a88e14aa-4cef-4977-84e2-d04f768b1889] Running
	I1030 23:16:54.313938  224508 system_pods.go:89] "kube-proxy-vwtpq" [46c3c09b-5ce4-49e7-95d4-0fbbfaa3af8b] Running
	I1030 23:16:54.313943  224508 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-371910" [302f7157-84a0-4aee-933a-a7ce7e41b434] Running
	I1030 23:16:54.313948  224508 system_pods.go:89] "storage-provisioner" [c22db9fe-302a-448e-a0bb-d16f660deb43] Running
	I1030 23:16:54.313954  224508 system_pods.go:126] duration metric: took 204.606239ms to wait for k8s-apps to be running ...
	I1030 23:16:54.313965  224508 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 23:16:54.314011  224508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 23:16:54.328032  224508 system_svc.go:56] duration metric: took 14.056798ms WaitForService to wait for kubelet.
	I1030 23:16:54.328056  224508 kubeadm.go:581] duration metric: took 34.277987762s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
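
	The WaitForService step above relies on the exit status of "systemctl is-active --quiet" for the kubelet unit: exit 0 means active, anything else means not (yet) running. A minimal local sketch of the same check via os/exec; running it against the minikube VM would additionally require the SSH hop that the ssh_runner lines perform.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// "systemctl is-active --quiet kubelet" exits 0 only when the unit is active.
		cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet service is not active:", err)
			return
		}
		fmt.Println("kubelet service is active")
	}
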
	I1030 23:16:54.328082  224508 node_conditions.go:102] verifying NodePressure condition ...
	I1030 23:16:54.506511  224508 request.go:629] Waited for 178.35558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.84:8443/api/v1/nodes
	I1030 23:16:54.509609  224508 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1030 23:16:54.509638  224508 node_conditions.go:123] node cpu capacity is 2
	I1030 23:16:54.509652  224508 node_conditions.go:105] duration metric: took 181.564759ms to run NodePressure ...
	I1030 23:16:54.509667  224508 start.go:228] waiting for startup goroutines ...
	I1030 23:16:54.509680  224508 start.go:233] waiting for cluster config update ...
	I1030 23:16:54.509716  224508 start.go:242] writing updated cluster config ...
	I1030 23:16:54.509997  224508 ssh_runner.go:195] Run: rm -f paused
	I1030 23:16:54.556945  224508 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1030 23:16:54.558951  224508 out.go:177] 
	W1030 23:16:54.560382  224508 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1030 23:16:54.561818  224508 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1030 23:16:54.563222  224508 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-371910" cluster and "default" namespace by default
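
	The skew warning a few lines up comes from comparing the minor version of the host kubectl (1.28.3) against the cluster version (1.18.20): a difference of 10 minor releases. A small illustrative Go sketch of that comparison follows; the function name and parsing are assumptions made for the example, not minikube's code.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns the absolute difference between the minor components of two
	// "major.minor.patch" version strings, e.g. "1.28.3" vs "1.18.20" gives 10.
	func minorSkew(a, b string) (int, error) {
		minor := func(v string) (int, error) {
			parts := strings.Split(v, ".")
			if len(parts) < 2 {
				return 0, fmt.Errorf("unexpected version %q", v)
			}
			return strconv.Atoi(parts[1])
		}
		ma, err := minor(a)
		if err != nil {
			return 0, err
		}
		mb, err := minor(b)
		if err != nil {
			return 0, err
		}
		if ma > mb {
			return ma - mb, nil
		}
		return mb - ma, nil
	}

	func main() {
		skew, err := minorSkew("1.28.3", "1.18.20")
		if err != nil {
			panic(err)
		}
		fmt.Println("minor skew:", skew) // prints: minor skew: 10
	}
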
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-10-30 23:15:31 UTC, ends at Mon 2023-10-30 23:19:54 UTC. --
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.416382252Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698707994416368807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202349,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=97d8868d-c7ed-4bce-91c4-88174c8a1873 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.417165168Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dd924d89-4ab9-4f4e-bb1d-6c2c903213d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.417236089Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dd924d89-4ab9-4f4e-bb1d-6c2c903213d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.417512764Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2dde7504c8b3527a7dab70454e856f24f948b473ce7edd9d91d9895f246faca,PodSandboxId:627ab25f8bdf45f09855e20816b09b2c39705393fb0b8423b83c62a34865374a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,State:CONTAINER_RUNNING,CreatedAt:1698707986144575965,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-hn27z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc0ca5a9-f37e-4405-8a87-5adb3483ae0c,},Annotations:map[string]string{io.kubernetes.container.hash: 47413497,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fabc2e13a2347d137551d9c72ad2ee0d2a72dbbc27ad4ee58fcaaf1a91d3e3cb,PodSandboxId:8bcb3a05e2fd2087762a81226b1eff3d6236403a450dbd694aa89a814d14cf51,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1698707846286373057,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11820c82-f3c8-42ca-952a-6f62808c5557,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: cc1d101b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4df0d2bb0fca257b2fe473f8b44705695bfd26f880f22421e9cf1c0fb0ec356,PodSandboxId:a85d28d1a136dd2ae901a70bad3c7f808b1e827c499e29b40ee2b7812e7906dc,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1698707826853505254,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-9fjnr,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb,},Annotations:map[string]string{io.kubernetes.container.hash: 70aef45d,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c9fb964698413df39e2dc83b124004d72167e03a0fda05f8a4af6f994ce7b407,PodSandboxId:546713bede30c878833368380041e864a4b3a579d047ab8ca6b361299663bcb5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698707817850538101,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-gs2xm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 09daac45-0081-4812-b830-d0039d475eb8,},Annotations:map[string]string{io.kubernetes.container.hash: f0afae89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41c3cd51a1be614ed4bb6a8d81d595632724d567164bfad7239c05a5f4c70c4,PodSandboxId:1baab7f5bb917da3b8e78f74d501613e971dd993dde2d5c6e40251d816d20e8a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698707817668484618,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2c47f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d5cc18aa-9a28-4d20-ae6b-b8a3758ce717,},Annotations:map[string]string{io.kubernetes.container.hash: ef72b469,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd6003accb0dd337cf1a06c2d26c1aec194d706ee8f510b75a4ddcda01b8c635,PodSandboxId:2e957ea0da8b3b9aa7f6cf2404b5877caa53effee23117dab79f538f78f7e1a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698707812814814723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22db9fe-302a-448e-a0bb-d16f660deb43,},Annotations:map[string]string{io.kubernetes.container.hash: 1ade01fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f98eb200a062bdf880b0ab7d033f95117c14acab7de4514a0157e0c1818c1f5,PodSandboxId:2e957ea0da8b3b9aa7f6cf2404b5877caa53effee23117dab79f538f78f7e1a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698707782053076587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22db9fe-302a-448e-a0bb-d16f660deb43,},Annotations:map[string]string{io.kubernetes.container.hash: 1ade01fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e52968681754ede52dc88040083fa9420db5c37452fc4e2f66b107cc6a694ec4,PodSandboxId:8d04062b93fec67ff40629d33466d1c103cabb44c312cf928db89c0832a9b017,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec
{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1698707781420084884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vwtpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46c3c09b-5ce4-49e7-95d4-0fbbfaa3af8b,},Annotations:map[string]string{io.kubernetes.container.hash: 3abbb424,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:064479fc4639b872801a4a37bd78167704f9b0df3079c3c01beccd10b0a4289d,PodSandboxId:c5710ec36416a8a4602b47ddd6c2890772f43bcf6714e838f236927ab4a852ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a
754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1698707781044510189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-8btxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f54fda-2da9-4595-83cf-626d79f41e88,},Annotations:map[string]string{io.kubernetes.container.hash: 417ebc8e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d11719a9acc33d103ef80e2733ee1670f165bed1ff2cd380ea146adb4108d3,PodSa
ndboxId:f57d9e72c6a553c974b433a084000bd54612c82145ae876d3b5d7a027d7922e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1698707758566561433,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-371910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2daa7972a2ebffa444c853a35cc9c002,},Annotations:map[string]string{io.kubernetes.container.hash: bab8556,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8212fe614e898e9aa5ddeeba0419d08af6a77a84631657d932c892086cae34b1,PodSandboxId:f6b2ca5d13643628f34ae8cc6451cd1c5afb9e1
05ce390d206ecc54cbae14c9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1698707757522825097,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-371910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80911df7b5bcbdec61c606d9828533ab7e4472fe773990b05816fabcab70a798,PodSandboxId:baccba62ceada04c10800773fc5837ff8147a6148b8fd
55a322415d33bda6314,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1698707757191291628,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-371910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c76e30472cef3ba2ed39a13e736f304fc35553f0c03e54fd681500be87c29f,PodSandboxId:99e420e6a324d68
14ecc3068ae0c7b13701fddee129de96a731a8d3060b88b85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1698707756921266044,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-371910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baec6d501b85c0f34ab73a18f30a0a68,},Annotations:map[string]string{io.kubernetes.container.hash: 60f75551,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dd924d89-4ab9-4f4e-bb1d-6c2c903213d2 name=/runtime.v1.RuntimeServic
e/ListContainers
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.461874289Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=bb4e3b00-9e0f-4f6e-8777-f259845f7e96 name=/runtime.v1.RuntimeService/Version
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.462006617Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=bb4e3b00-9e0f-4f6e-8777-f259845f7e96 name=/runtime.v1.RuntimeService/Version
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.464350103Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=38213a6a-3452-4c3a-abe9-88f6ac2cf7fe name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.464994136Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698707994464978722,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202349,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=38213a6a-3452-4c3a-abe9-88f6ac2cf7fe name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.465968431Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=222c6bc0-7d79-4410-a09c-0285823f905e name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.466046161Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=222c6bc0-7d79-4410-a09c-0285823f905e name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.466399624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2dde7504c8b3527a7dab70454e856f24f948b473ce7edd9d91d9895f246faca,PodSandboxId:627ab25f8bdf45f09855e20816b09b2c39705393fb0b8423b83c62a34865374a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,State:CONTAINER_RUNNING,CreatedAt:1698707986144575965,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-hn27z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc0ca5a9-f37e-4405-8a87-5adb3483ae0c,},Annotations:map[string]string{io.kubernetes.container.hash: 47413497,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fabc2e13a2347d137551d9c72ad2ee0d2a72dbbc27ad4ee58fcaaf1a91d3e3cb,PodSandboxId:8bcb3a05e2fd2087762a81226b1eff3d6236403a450dbd694aa89a814d14cf51,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1698707846286373057,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11820c82-f3c8-42ca-952a-6f62808c5557,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: cc1d101b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4df0d2bb0fca257b2fe473f8b44705695bfd26f880f22421e9cf1c0fb0ec356,PodSandboxId:a85d28d1a136dd2ae901a70bad3c7f808b1e827c499e29b40ee2b7812e7906dc,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1698707826853505254,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-9fjnr,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb,},Annotations:map[string]string{io.kubernetes.container.hash: 70aef45d,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c9fb964698413df39e2dc83b124004d72167e03a0fda05f8a4af6f994ce7b407,PodSandboxId:546713bede30c878833368380041e864a4b3a579d047ab8ca6b361299663bcb5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698707817850538101,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-gs2xm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 09daac45-0081-4812-b830-d0039d475eb8,},Annotations:map[string]string{io.kubernetes.container.hash: f0afae89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41c3cd51a1be614ed4bb6a8d81d595632724d567164bfad7239c05a5f4c70c4,PodSandboxId:1baab7f5bb917da3b8e78f74d501613e971dd993dde2d5c6e40251d816d20e8a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698707817668484618,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2c47f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d5cc18aa-9a28-4d20-ae6b-b8a3758ce717,},Annotations:map[string]string{io.kubernetes.container.hash: ef72b469,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd6003accb0dd337cf1a06c2d26c1aec194d706ee8f510b75a4ddcda01b8c635,PodSandboxId:2e957ea0da8b3b9aa7f6cf2404b5877caa53effee23117dab79f538f78f7e1a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698707812814814723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22db9fe-302a-448e-a0bb-d16f660deb43,},Annotations:map[string]string{io.kubernetes.container.hash: 1ade01fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f98eb200a062bdf880b0ab7d033f95117c14acab7de4514a0157e0c1818c1f5,PodSandboxId:2e957ea0da8b3b9aa7f6cf2404b5877caa53effee23117dab79f538f78f7e1a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698707782053076587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22db9fe-302a-448e-a0bb-d16f660deb43,},Annotations:map[string]string{io.kubernetes.container.hash: 1ade01fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e52968681754ede52dc88040083fa9420db5c37452fc4e2f66b107cc6a694ec4,PodSandboxId:8d04062b93fec67ff40629d33466d1c103cabb44c312cf928db89c0832a9b017,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec
{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1698707781420084884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vwtpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46c3c09b-5ce4-49e7-95d4-0fbbfaa3af8b,},Annotations:map[string]string{io.kubernetes.container.hash: 3abbb424,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:064479fc4639b872801a4a37bd78167704f9b0df3079c3c01beccd10b0a4289d,PodSandboxId:c5710ec36416a8a4602b47ddd6c2890772f43bcf6714e838f236927ab4a852ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a
754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1698707781044510189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-8btxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f54fda-2da9-4595-83cf-626d79f41e88,},Annotations:map[string]string{io.kubernetes.container.hash: 417ebc8e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d11719a9acc33d103ef80e2733ee1670f165bed1ff2cd380ea146adb4108d3,PodSa
ndboxId:f57d9e72c6a553c974b433a084000bd54612c82145ae876d3b5d7a027d7922e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1698707758566561433,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-371910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2daa7972a2ebffa444c853a35cc9c002,},Annotations:map[string]string{io.kubernetes.container.hash: bab8556,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8212fe614e898e9aa5ddeeba0419d08af6a77a84631657d932c892086cae34b1,PodSandboxId:f6b2ca5d13643628f34ae8cc6451cd1c5afb9e1
05ce390d206ecc54cbae14c9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1698707757522825097,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-371910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80911df7b5bcbdec61c606d9828533ab7e4472fe773990b05816fabcab70a798,PodSandboxId:baccba62ceada04c10800773fc5837ff8147a6148b8fd
55a322415d33bda6314,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1698707757191291628,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-371910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c76e30472cef3ba2ed39a13e736f304fc35553f0c03e54fd681500be87c29f,PodSandboxId:99e420e6a324d68
14ecc3068ae0c7b13701fddee129de96a731a8d3060b88b85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1698707756921266044,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-371910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baec6d501b85c0f34ab73a18f30a0a68,},Annotations:map[string]string{io.kubernetes.container.hash: 60f75551,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=222c6bc0-7d79-4410-a09c-0285823f905e name=/runtime.v1.RuntimeServic
e/ListContainers
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.506858607Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0103b565-db7e-417f-a61c-9d7aeb6be435 name=/runtime.v1.RuntimeService/Version
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.506906356Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0103b565-db7e-417f-a61c-9d7aeb6be435 name=/runtime.v1.RuntimeService/Version
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.508163848Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9d232f95-5117-4cd6-9234-8e48c40251de name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.508613944Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698707994508601211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202349,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=9d232f95-5117-4cd6-9234-8e48c40251de name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.509223376Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e8322440-fe48-4c6e-b9a6-58b3d0463e41 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.509272722Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e8322440-fe48-4c6e-b9a6-58b3d0463e41 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.509521298Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2dde7504c8b3527a7dab70454e856f24f948b473ce7edd9d91d9895f246faca,PodSandboxId:627ab25f8bdf45f09855e20816b09b2c39705393fb0b8423b83c62a34865374a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,State:CONTAINER_RUNNING,CreatedAt:1698707986144575965,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-hn27z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc0ca5a9-f37e-4405-8a87-5adb3483ae0c,},Annotations:map[string]string{io.kubernetes.container.hash: 47413497,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fabc2e13a2347d137551d9c72ad2ee0d2a72dbbc27ad4ee58fcaaf1a91d3e3cb,PodSandboxId:8bcb3a05e2fd2087762a81226b1eff3d6236403a450dbd694aa89a814d14cf51,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1698707846286373057,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11820c82-f3c8-42ca-952a-6f62808c5557,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: cc1d101b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4df0d2bb0fca257b2fe473f8b44705695bfd26f880f22421e9cf1c0fb0ec356,PodSandboxId:a85d28d1a136dd2ae901a70bad3c7f808b1e827c499e29b40ee2b7812e7906dc,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1698707826853505254,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-9fjnr,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb,},Annotations:map[string]string{io.kubernetes.container.hash: 70aef45d,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c9fb964698413df39e2dc83b124004d72167e03a0fda05f8a4af6f994ce7b407,PodSandboxId:546713bede30c878833368380041e864a4b3a579d047ab8ca6b361299663bcb5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698707817850538101,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-gs2xm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 09daac45-0081-4812-b830-d0039d475eb8,},Annotations:map[string]string{io.kubernetes.container.hash: f0afae89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41c3cd51a1be614ed4bb6a8d81d595632724d567164bfad7239c05a5f4c70c4,PodSandboxId:1baab7f5bb917da3b8e78f74d501613e971dd993dde2d5c6e40251d816d20e8a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698707817668484618,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2c47f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d5cc18aa-9a28-4d20-ae6b-b8a3758ce717,},Annotations:map[string]string{io.kubernetes.container.hash: ef72b469,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd6003accb0dd337cf1a06c2d26c1aec194d706ee8f510b75a4ddcda01b8c635,PodSandboxId:2e957ea0da8b3b9aa7f6cf2404b5877caa53effee23117dab79f538f78f7e1a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698707812814814723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22db9fe-302a-448e-a0bb-d16f660deb43,},Annotations:map[string]string{io.kubernetes.container.hash: 1ade01fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f98eb200a062bdf880b0ab7d033f95117c14acab7de4514a0157e0c1818c1f5,PodSandboxId:2e957ea0da8b3b9aa7f6cf2404b5877caa53effee23117dab79f538f78f7e1a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698707782053076587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22db9fe-302a-448e-a0bb-d16f660deb43,},Annotations:map[string]string{io.kubernetes.container.hash: 1ade01fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e52968681754ede52dc88040083fa9420db5c37452fc4e2f66b107cc6a694ec4,PodSandboxId:8d04062b93fec67ff40629d33466d1c103cabb44c312cf928db89c0832a9b017,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec
{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1698707781420084884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vwtpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46c3c09b-5ce4-49e7-95d4-0fbbfaa3af8b,},Annotations:map[string]string{io.kubernetes.container.hash: 3abbb424,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:064479fc4639b872801a4a37bd78167704f9b0df3079c3c01beccd10b0a4289d,PodSandboxId:c5710ec36416a8a4602b47ddd6c2890772f43bcf6714e838f236927ab4a852ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a
754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1698707781044510189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-8btxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f54fda-2da9-4595-83cf-626d79f41e88,},Annotations:map[string]string{io.kubernetes.container.hash: 417ebc8e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d11719a9acc33d103ef80e2733ee1670f165bed1ff2cd380ea146adb4108d3,PodSa
ndboxId:f57d9e72c6a553c974b433a084000bd54612c82145ae876d3b5d7a027d7922e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1698707758566561433,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-371910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2daa7972a2ebffa444c853a35cc9c002,},Annotations:map[string]string{io.kubernetes.container.hash: bab8556,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8212fe614e898e9aa5ddeeba0419d08af6a77a84631657d932c892086cae34b1,PodSandboxId:f6b2ca5d13643628f34ae8cc6451cd1c5afb9e1
05ce390d206ecc54cbae14c9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1698707757522825097,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-371910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80911df7b5bcbdec61c606d9828533ab7e4472fe773990b05816fabcab70a798,PodSandboxId:baccba62ceada04c10800773fc5837ff8147a6148b8fd
55a322415d33bda6314,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1698707757191291628,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-371910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c76e30472cef3ba2ed39a13e736f304fc35553f0c03e54fd681500be87c29f,PodSandboxId:99e420e6a324d68
14ecc3068ae0c7b13701fddee129de96a731a8d3060b88b85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1698707756921266044,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-371910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baec6d501b85c0f34ab73a18f30a0a68,},Annotations:map[string]string{io.kubernetes.container.hash: 60f75551,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e8322440-fe48-4c6e-b9a6-58b3d0463e41 name=/runtime.v1.RuntimeServic
e/ListContainers
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.542076676Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e7d6dc28-f5e0-4907-80c2-944553dda9ca name=/runtime.v1.RuntimeService/Version
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.542119121Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e7d6dc28-f5e0-4907-80c2-944553dda9ca name=/runtime.v1.RuntimeService/Version
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.543228439Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=562288c7-fb00-45d8-a741-680b3c40d353 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.543667178Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698707994543656288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202349,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=562288c7-fb00-45d8-a741-680b3c40d353 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.544559741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6e5f289f-f5a6-440d-a5e2-07c467f351f4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.544606260Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6e5f289f-f5a6-440d-a5e2-07c467f351f4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:19:54 ingress-addon-legacy-371910 crio[722]: time="2023-10-30 23:19:54.544964451Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2dde7504c8b3527a7dab70454e856f24f948b473ce7edd9d91d9895f246faca,PodSandboxId:627ab25f8bdf45f09855e20816b09b2c39705393fb0b8423b83c62a34865374a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d,State:CONTAINER_RUNNING,CreatedAt:1698707986144575965,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-hn27z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc0ca5a9-f37e-4405-8a87-5adb3483ae0c,},Annotations:map[string]string{io.kubernetes.container.hash: 47413497,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fabc2e13a2347d137551d9c72ad2ee0d2a72dbbc27ad4ee58fcaaf1a91d3e3cb,PodSandboxId:8bcb3a05e2fd2087762a81226b1eff3d6236403a450dbd694aa89a814d14cf51,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1698707846286373057,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11820c82-f3c8-42ca-952a-6f62808c5557,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: cc1d101b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4df0d2bb0fca257b2fe473f8b44705695bfd26f880f22421e9cf1c0fb0ec356,PodSandboxId:a85d28d1a136dd2ae901a70bad3c7f808b1e827c499e29b40ee2b7812e7906dc,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1698707826853505254,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-9fjnr,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb,},Annotations:map[string]string{io.kubernetes.container.hash: 70aef45d,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c9fb964698413df39e2dc83b124004d72167e03a0fda05f8a4af6f994ce7b407,PodSandboxId:546713bede30c878833368380041e864a4b3a579d047ab8ca6b361299663bcb5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698707817850538101,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-gs2xm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 09daac45-0081-4812-b830-d0039d475eb8,},Annotations:map[string]string{io.kubernetes.container.hash: f0afae89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41c3cd51a1be614ed4bb6a8d81d595632724d567164bfad7239c05a5f4c70c4,PodSandboxId:1baab7f5bb917da3b8e78f74d501613e971dd993dde2d5c6e40251d816d20e8a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698707817668484618,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-2c47f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d5cc18aa-9a28-4d20-ae6b-b8a3758ce717,},Annotations:map[string]string{io.kubernetes.container.hash: ef72b469,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd6003accb0dd337cf1a06c2d26c1aec194d706ee8f510b75a4ddcda01b8c635,PodSandboxId:2e957ea0da8b3b9aa7f6cf2404b5877caa53effee23117dab79f538f78f7e1a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698707812814814723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22db9fe-302a-448e-a0bb-d16f660deb43,},Annotations:map[string]string{io.kubernetes.container.hash: 1ade01fe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f98eb200a062bdf880b0ab7d033f95117c14acab7de4514a0157e0c1818c1f5,PodSandboxId:2e957ea0da8b3b9aa7f6cf2404b5877caa53effee23117dab79f538f78f7e1a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698707782053076587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22db9fe-302a-448e-a0bb-d16f660deb43,},Annotations:map[string]string{io.kubernetes.container.hash: 1ade01fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e52968681754ede52dc88040083fa9420db5c37452fc4e2f66b107cc6a694ec4,PodSandboxId:8d04062b93fec67ff40629d33466d1c103cabb44c312cf928db89c0832a9b017,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec
{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1698707781420084884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vwtpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46c3c09b-5ce4-49e7-95d4-0fbbfaa3af8b,},Annotations:map[string]string{io.kubernetes.container.hash: 3abbb424,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:064479fc4639b872801a4a37bd78167704f9b0df3079c3c01beccd10b0a4289d,PodSandboxId:c5710ec36416a8a4602b47ddd6c2890772f43bcf6714e838f236927ab4a852ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a
754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1698707781044510189,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-8btxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f54fda-2da9-4595-83cf-626d79f41e88,},Annotations:map[string]string{io.kubernetes.container.hash: 417ebc8e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d11719a9acc33d103ef80e2733ee1670f165bed1ff2cd380ea146adb4108d3,PodSa
ndboxId:f57d9e72c6a553c974b433a084000bd54612c82145ae876d3b5d7a027d7922e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1698707758566561433,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-371910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2daa7972a2ebffa444c853a35cc9c002,},Annotations:map[string]string{io.kubernetes.container.hash: bab8556,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8212fe614e898e9aa5ddeeba0419d08af6a77a84631657d932c892086cae34b1,PodSandboxId:f6b2ca5d13643628f34ae8cc6451cd1c5afb9e1
05ce390d206ecc54cbae14c9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1698707757522825097,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-371910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80911df7b5bcbdec61c606d9828533ab7e4472fe773990b05816fabcab70a798,PodSandboxId:baccba62ceada04c10800773fc5837ff8147a6148b8fd
55a322415d33bda6314,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1698707757191291628,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-371910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c76e30472cef3ba2ed39a13e736f304fc35553f0c03e54fd681500be87c29f,PodSandboxId:99e420e6a324d68
14ecc3068ae0c7b13701fddee129de96a731a8d3060b88b85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1698707756921266044,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-371910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baec6d501b85c0f34ab73a18f30a0a68,},Annotations:map[string]string{io.kubernetes.container.hash: 60f75551,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6e5f289f-f5a6-440d-a5e2-07c467f351f4 name=/runtime.v1.RuntimeServic
e/ListContainers
	
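The crio debug entries above are routine polling of the CRI API: each cycle issues a Version, an ImageFsInfo and an unfiltered ListContainers request a few milliseconds apart, which is why the same container set is printed repeatedly ("No filters were applied, returning full container list"). As a rough sketch only (assuming CRI-O's default socket path on the node, not a command shown in this log), an equivalent unfiltered listing can be requested by hand with crictl:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

The "container status" section that follows shows the same container set rendered as a table.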
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b2dde7504c8b3       gcr.io/google-samples/hello-app@sha256:9f3072b59865e9203c931b3ba3e8ae52e1ca3002cff46ed204bda9d37850420d            8 seconds ago       Running             hello-world-app           0                   627ab25f8bdf4       hello-world-app-5f5d8b66bb-hn27z
	fabc2e13a2347       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                    2 minutes ago       Running             nginx                     0                   8bcb3a05e2fd2       nginx
	f4df0d2bb0fca       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   a85d28d1a136d       ingress-nginx-controller-7fcf777cb7-9fjnr
	c9fb964698413       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              patch                     0                   546713bede30c       ingress-nginx-admission-patch-gs2xm
	c41c3cd51a1be       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              create                    0                   1baab7f5bb917       ingress-nginx-admission-create-2c47f
	fd6003accb0dd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       1                   2e957ea0da8b3       storage-provisioner
	2f98eb200a062       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Exited              storage-provisioner       0                   2e957ea0da8b3       storage-provisioner
	e52968681754e       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   8d04062b93fec       kube-proxy-vwtpq
	064479fc4639b       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   c5710ec36416a       coredns-66bff467f8-8btxz
	52d11719a9acc       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   f57d9e72c6a55       etcd-ingress-addon-legacy-371910
	8212fe614e898       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   f6b2ca5d13643       kube-scheduler-ingress-addon-legacy-371910
	80911df7b5bcb       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   baccba62ceada       kube-controller-manager-ingress-addon-legacy-371910
	69c76e30472ce       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   99e420e6a324d       kube-apiserver-ingress-addon-legacy-371910
	
	* 
	* ==> coredns [064479fc4639b872801a4a37bd78167704f9b0df3079c3c01beccd10b0a4289d] <==
	* [INFO] 10.244.0.5:54755 - 32330 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000040723s
	[INFO] 10.244.0.5:36301 - 17470 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000071039s
	[INFO] 10.244.0.5:54755 - 50652 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037641s
	[INFO] 10.244.0.5:36301 - 1470 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000079683s
	[INFO] 10.244.0.5:54755 - 26651 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000027564s
	[INFO] 10.244.0.5:36301 - 28742 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000075866s
	[INFO] 10.244.0.5:54755 - 35967 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000025746s
	[INFO] 10.244.0.5:36301 - 10173 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000059302s
	[INFO] 10.244.0.5:54755 - 28630 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003441s
	[INFO] 10.244.0.5:36301 - 38552 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000104793s
	[INFO] 10.244.0.5:54755 - 60475 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000039379s
	[INFO] 10.244.0.5:52549 - 27672 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000105698s
	[INFO] 10.244.0.5:60557 - 28824 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000075522s
	[INFO] 10.244.0.5:52549 - 9537 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00007859s
	[INFO] 10.244.0.5:52549 - 38549 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000068061s
	[INFO] 10.244.0.5:60557 - 8439 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000054234s
	[INFO] 10.244.0.5:52549 - 46585 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000067399s
	[INFO] 10.244.0.5:60557 - 6974 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000066725s
	[INFO] 10.244.0.5:52549 - 26823 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061764s
	[INFO] 10.244.0.5:60557 - 14039 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055464s
	[INFO] 10.244.0.5:52549 - 27591 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000112635s
	[INFO] 10.244.0.5:60557 - 36687 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065192s
	[INFO] 10.244.0.5:52549 - 16827 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000091031s
	[INFO] 10.244.0.5:60557 - 54962 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063146s
	[INFO] 10.244.0.5:60557 - 18643 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000116714s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-371910
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-371910
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4
	                    minikube.k8s.io/name=ingress-addon-legacy-371910
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_30T23_16_05_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Oct 2023 23:16:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-371910
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Oct 2023 23:19:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Oct 2023 23:17:45 +0000   Mon, 30 Oct 2023 23:15:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Oct 2023 23:17:45 +0000   Mon, 30 Oct 2023 23:15:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Oct 2023 23:17:45 +0000   Mon, 30 Oct 2023 23:15:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Oct 2023 23:17:45 +0000   Mon, 30 Oct 2023 23:16:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ingress-addon-legacy-371910
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	System Info:
	  Machine ID:                 ab805725ea764653a1ecc0158b065a7b
	  System UUID:                ab805725-ea76-4653-a1ec-c0158b065a7b
	  Boot ID:                    d785da66-c537-4f8c-9422-0547cc804a45
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-hn27z                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 coredns-66bff467f8-8btxz                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m34s
	  kube-system                 etcd-ingress-addon-legacy-371910                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-apiserver-ingress-addon-legacy-371910             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-371910    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-proxy-vwtpq                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-scheduler-ingress-addon-legacy-371910             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 3m49s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s  kubelet     Node ingress-addon-legacy-371910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s  kubelet     Node ingress-addon-legacy-371910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s  kubelet     Node ingress-addon-legacy-371910 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m49s  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m39s  kubelet     Node ingress-addon-legacy-371910 status is now: NodeReady
	  Normal  Starting                 3m32s  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Oct30 23:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.092330] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.456618] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.376542] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.151708] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.042450] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.389684] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.094936] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.135614] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.099499] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[  +0.206605] systemd-fstab-generator[707]: Ignoring "noauto" for root device
	[  +7.356621] systemd-fstab-generator[1034]: Ignoring "noauto" for root device
	[  +2.634263] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct30 23:16] systemd-fstab-generator[1422]: Ignoring "noauto" for root device
	[ +15.801637] kauditd_printk_skb: 6 callbacks suppressed
	[ +32.364202] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.853907] kauditd_printk_skb: 10 callbacks suppressed
	[Oct30 23:17] kauditd_printk_skb: 3 callbacks suppressed
	[Oct30 23:19] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.954526] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [52d11719a9acc33d103ef80e2733ee1670f165bed1ff2cd380ea146adb4108d3] <==
	* raft2023/10/30 23:15:58 INFO: 9759e6b18ded37f5 switched to configuration voters=(10906001622919100405)
	2023-10-30 23:15:58.729266 W | auth: simple token is not cryptographically signed
	2023-10-30 23:15:58.733019 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-10-30 23:15:58.735202 I | etcdserver: 9759e6b18ded37f5 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/10/30 23:15:58 INFO: 9759e6b18ded37f5 switched to configuration voters=(10906001622919100405)
	2023-10-30 23:15:58.735548 I | etcdserver/membership: added member 9759e6b18ded37f5 [https://192.168.39.84:2380] to cluster 5f38fc1d36b986e7
	2023-10-30 23:15:58.736377 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-30 23:15:58.736473 I | embed: listening for peers on 192.168.39.84:2380
	2023-10-30 23:15:58.736537 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/10/30 23:15:58 INFO: 9759e6b18ded37f5 is starting a new election at term 1
	raft2023/10/30 23:15:58 INFO: 9759e6b18ded37f5 became candidate at term 2
	raft2023/10/30 23:15:58 INFO: 9759e6b18ded37f5 received MsgVoteResp from 9759e6b18ded37f5 at term 2
	raft2023/10/30 23:15:58 INFO: 9759e6b18ded37f5 became leader at term 2
	raft2023/10/30 23:15:58 INFO: raft.node: 9759e6b18ded37f5 elected leader 9759e6b18ded37f5 at term 2
	2023-10-30 23:15:58.822083 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-30 23:15:58.823065 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-30 23:15:58.823133 I | etcdserver/api: enabled capabilities for version 3.4
	2023-10-30 23:15:58.823153 I | etcdserver: published {Name:ingress-addon-legacy-371910 ClientURLs:[https://192.168.39.84:2379]} to cluster 5f38fc1d36b986e7
	2023-10-30 23:15:58.823271 I | embed: ready to serve client requests
	2023-10-30 23:15:58.823977 I | embed: ready to serve client requests
	2023-10-30 23:15:58.824668 I | embed: serving client requests on 192.168.39.84:2379
	2023-10-30 23:15:58.826908 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-30 23:16:20.586502 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-371910\" " with result "range_response_count:1 size:6302" took too long (108.997509ms) to execute
	2023-10-30 23:16:22.022759 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-8btxz\" " with result "range_response_count:1 size:4277" took too long (222.610631ms) to execute
	2023-10-30 23:16:22.022927 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-8btxz\" " with result "range_response_count:1 size:4277" took too long (315.099044ms) to execute
	
	* 
	* ==> kernel <==
	*  23:19:54 up 4 min,  0 users,  load average: 1.56, 0.64, 0.25
	Linux ingress-addon-legacy-371910 5.10.57 #1 SMP Mon Oct 30 21:42:24 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [69c76e30472cef3ba2ed39a13e736f304fc35553f0c03e54fd681500be87c29f] <==
	* I1030 23:16:01.829436       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1030 23:16:01.829477       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	I1030 23:16:01.868375       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1030 23:16:01.876820       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1030 23:16:01.880092       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1030 23:16:01.880143       1 cache.go:39] Caches are synced for autoregister controller
	I1030 23:16:01.930048       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1030 23:16:02.766474       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1030 23:16:02.766640       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1030 23:16:02.771507       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1030 23:16:02.776872       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1030 23:16:02.776961       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1030 23:16:03.218225       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1030 23:16:03.268975       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1030 23:16:03.425241       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.84]
	I1030 23:16:03.426192       1 controller.go:609] quota admission added evaluator for: endpoints
	I1030 23:16:03.429631       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1030 23:16:04.142283       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1030 23:16:05.138018       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1030 23:16:05.289865       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1030 23:16:05.607389       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1030 23:16:19.626521       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1030 23:16:20.188334       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1030 23:16:55.424415       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1030 23:17:23.453313       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [80911df7b5bcbdec61c606d9828533ab7e4472fe773990b05816fabcab70a798] <==
	* W1030 23:16:20.116443       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-371910. Assuming now as a timestamp.
	I1030 23:16:20.116494       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I1030 23:16:20.116519       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-371910", UID:"4d134102-332c-4b3d-bb49-08db78fc7549", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-371910 event: Registered Node ingress-addon-legacy-371910 in Controller
	I1030 23:16:20.116537       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1030 23:16:20.119069       1 shared_informer.go:230] Caches are synced for resource quota 
	I1030 23:16:20.143558       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1030 23:16:20.143646       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1030 23:16:20.161373       1 shared_informer.go:230] Caches are synced for resource quota 
	E1030 23:16:20.175996       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	I1030 23:16:20.185077       1 shared_informer.go:230] Caches are synced for deployment 
	I1030 23:16:20.188783       1 shared_informer.go:230] Caches are synced for disruption 
	I1030 23:16:20.188849       1 disruption.go:339] Sending events to api server.
	I1030 23:16:20.196322       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1030 23:16:20.216195       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"d9b4331d-697d-4b18-a925-0936dd28f221", APIVersion:"apps/v1", ResourceVersion:"323", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	E1030 23:16:20.326620       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I1030 23:16:20.330353       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"2a12ec4e-4d4b-452d-8eb4-e7162609caff", APIVersion:"apps/v1", ResourceVersion:"331", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-8btxz
	I1030 23:16:55.398511       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"bd9a0c9a-e075-4b1c-89ac-04ba3dbe5835", APIVersion:"apps/v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1030 23:16:55.431776       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"02a461fe-b036-466d-bb9c-05455d6113a3", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-9fjnr
	I1030 23:16:55.475942       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"33da6f6e-6457-4c83-a5bd-f42a33932639", APIVersion:"batch/v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-2c47f
	I1030 23:16:55.535492       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6a7c7a63-6a5e-4aa9-8b08-0e8330473961", APIVersion:"batch/v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-gs2xm
	I1030 23:16:57.937001       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"33da6f6e-6457-4c83-a5bd-f42a33932639", APIVersion:"batch/v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1030 23:16:58.946126       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6a7c7a63-6a5e-4aa9-8b08-0e8330473961", APIVersion:"batch/v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1030 23:19:42.851313       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"3bff76c9-a30f-427b-a370-c489534a0ad9", APIVersion:"apps/v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1030 23:19:42.861555       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"7e23a5cd-11bd-4c49-a4fd-577109f63937", APIVersion:"apps/v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-hn27z
	E1030 23:19:51.534250       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-fwhht" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [e52968681754ede52dc88040083fa9420db5c37452fc4e2f66b107cc6a694ec4] <==
	* W1030 23:16:22.195606       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1030 23:16:22.207954       1 node.go:136] Successfully retrieved node IP: 192.168.39.84
	I1030 23:16:22.209617       1 server_others.go:186] Using iptables Proxier.
	I1030 23:16:22.209983       1 server.go:583] Version: v1.18.20
	I1030 23:16:22.213345       1 config.go:315] Starting service config controller
	I1030 23:16:22.213543       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1030 23:16:22.213932       1 config.go:133] Starting endpoints config controller
	I1030 23:16:22.213965       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1030 23:16:22.313980       1 shared_informer.go:230] Caches are synced for service config 
	I1030 23:16:22.314435       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [8212fe614e898e9aa5ddeeba0419d08af6a77a84631657d932c892086cae34b1] <==
	* W1030 23:16:01.844489       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1030 23:16:01.844497       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1030 23:16:01.844503       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1030 23:16:01.883276       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1030 23:16:01.883342       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1030 23:16:01.890432       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1030 23:16:01.890564       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1030 23:16:01.890841       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1030 23:16:01.890906       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1030 23:16:01.896138       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1030 23:16:01.896542       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1030 23:16:01.897166       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1030 23:16:01.898589       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1030 23:16:01.898955       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1030 23:16:01.899185       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1030 23:16:01.899238       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1030 23:16:01.899288       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1030 23:16:01.899324       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1030 23:16:01.899368       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1030 23:16:01.899409       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1030 23:16:01.901854       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1030 23:16:02.826301       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1030 23:16:02.927823       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1030 23:16:02.952867       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1030 23:16:03.490908       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-30 23:15:31 UTC, ends at Mon 2023-10-30 23:19:55 UTC. --
	Oct 30 23:17:00 ingress-addon-legacy-371910 kubelet[1429]: I1030 23:17:00.135798    1429 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09daac45-0081-4812-b830-d0039d475eb8-ingress-nginx-admission-token-6vz7s" (OuterVolumeSpecName: "ingress-nginx-admission-token-6vz7s") pod "09daac45-0081-4812-b830-d0039d475eb8" (UID: "09daac45-0081-4812-b830-d0039d475eb8"). InnerVolumeSpecName "ingress-nginx-admission-token-6vz7s". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 30 23:17:00 ingress-addon-legacy-371910 kubelet[1429]: I1030 23:17:00.219806    1429 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-6vz7s" (UniqueName: "kubernetes.io/secret/09daac45-0081-4812-b830-d0039d475eb8-ingress-nginx-admission-token-6vz7s") on node "ingress-addon-legacy-371910" DevicePath ""
	Oct 30 23:17:06 ingress-addon-legacy-371910 kubelet[1429]: W1030 23:17:06.864656    1429 container.go:412] Failed to create summary reader for "/kubepods/burstable/pod0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb/crio-conmon-f4df0d2bb0fca257b2fe473f8b44705695bfd26f880f22421e9cf1c0fb0ec356": none of the resources are being tracked.
	Oct 30 23:17:08 ingress-addon-legacy-371910 kubelet[1429]: I1030 23:17:08.171107    1429 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Oct 30 23:17:08 ingress-addon-legacy-371910 kubelet[1429]: I1030 23:17:08.348547    1429 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-bm5wb" (UniqueName: "kubernetes.io/secret/d4df9e30-4600-4801-8b23-59d715e5e96a-minikube-ingress-dns-token-bm5wb") pod "kube-ingress-dns-minikube" (UID: "d4df9e30-4600-4801-8b23-59d715e5e96a")
	Oct 30 23:17:23 ingress-addon-legacy-371910 kubelet[1429]: I1030 23:17:23.626953    1429 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Oct 30 23:17:23 ingress-addon-legacy-371910 kubelet[1429]: I1030 23:17:23.804905    1429 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-5jhkb" (UniqueName: "kubernetes.io/secret/11820c82-f3c8-42ca-952a-6f62808c5557-default-token-5jhkb") pod "nginx" (UID: "11820c82-f3c8-42ca-952a-6f62808c5557")
	Oct 30 23:19:42 ingress-addon-legacy-371910 kubelet[1429]: I1030 23:19:42.875461    1429 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Oct 30 23:19:43 ingress-addon-legacy-371910 kubelet[1429]: I1030 23:19:43.068500    1429 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-5jhkb" (UniqueName: "kubernetes.io/secret/fc0ca5a9-f37e-4405-8a87-5adb3483ae0c-default-token-5jhkb") pod "hello-world-app-5f5d8b66bb-hn27z" (UID: "fc0ca5a9-f37e-4405-8a87-5adb3483ae0c")
	Oct 30 23:19:44 ingress-addon-legacy-371910 kubelet[1429]: I1030 23:19:44.806364    1429 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5689753f0ba6eea26c88a7d52fc0495335603f3c82801dcfc7294ef38220eaa1
	Oct 30 23:19:44 ingress-addon-legacy-371910 kubelet[1429]: I1030 23:19:44.978600    1429 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-bm5wb" (UniqueName: "kubernetes.io/secret/d4df9e30-4600-4801-8b23-59d715e5e96a-minikube-ingress-dns-token-bm5wb") pod "d4df9e30-4600-4801-8b23-59d715e5e96a" (UID: "d4df9e30-4600-4801-8b23-59d715e5e96a")
	Oct 30 23:19:44 ingress-addon-legacy-371910 kubelet[1429]: I1030 23:19:44.986158    1429 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4df9e30-4600-4801-8b23-59d715e5e96a-minikube-ingress-dns-token-bm5wb" (OuterVolumeSpecName: "minikube-ingress-dns-token-bm5wb") pod "d4df9e30-4600-4801-8b23-59d715e5e96a" (UID: "d4df9e30-4600-4801-8b23-59d715e5e96a"). InnerVolumeSpecName "minikube-ingress-dns-token-bm5wb". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 30 23:19:45 ingress-addon-legacy-371910 kubelet[1429]: I1030 23:19:45.002919    1429 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5689753f0ba6eea26c88a7d52fc0495335603f3c82801dcfc7294ef38220eaa1
	Oct 30 23:19:45 ingress-addon-legacy-371910 kubelet[1429]: E1030 23:19:45.003519    1429 remote_runtime.go:295] ContainerStatus "5689753f0ba6eea26c88a7d52fc0495335603f3c82801dcfc7294ef38220eaa1" from runtime service failed: rpc error: code = NotFound desc = could not find container "5689753f0ba6eea26c88a7d52fc0495335603f3c82801dcfc7294ef38220eaa1": container with ID starting with 5689753f0ba6eea26c88a7d52fc0495335603f3c82801dcfc7294ef38220eaa1 not found: ID does not exist
	Oct 30 23:19:45 ingress-addon-legacy-371910 kubelet[1429]: I1030 23:19:45.079061    1429 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-bm5wb" (UniqueName: "kubernetes.io/secret/d4df9e30-4600-4801-8b23-59d715e5e96a-minikube-ingress-dns-token-bm5wb") on node "ingress-addon-legacy-371910" DevicePath ""
	Oct 30 23:19:46 ingress-addon-legacy-371910 kubelet[1429]: E1030 23:19:46.937672    1429 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-9fjnr.179305e9fa4c4358", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-9fjnr", UID:"0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb", APIVersion:"v1", ResourceVersion:"436", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-371910"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1482c64b7a60f58, ext:221877717154, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1482c64b7a60f58, ext:221877717154, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-9fjnr.179305e9fa4c4358" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 30 23:19:46 ingress-addon-legacy-371910 kubelet[1429]: E1030 23:19:46.969000    1429 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-9fjnr.179305e9fa4c4358", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-9fjnr", UID:"0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb", APIVersion:"v1", ResourceVersion:"436", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-371910"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1482c64b7a60f58, ext:221877717154, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1482c64b9443cf7, ext:221904860738, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-9fjnr.179305e9fa4c4358" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 30 23:19:49 ingress-addon-legacy-371910 kubelet[1429]: W1030 23:19:49.832489    1429 pod_container_deletor.go:77] Container "a85d28d1a136dd2ae901a70bad3c7f808b1e827c499e29b40ee2b7812e7906dc" not found in pod's containers
	Oct 30 23:19:51 ingress-addon-legacy-371910 kubelet[1429]: I1030 23:19:51.101334    1429 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb-webhook-cert") pod "0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb" (UID: "0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb")
	Oct 30 23:19:51 ingress-addon-legacy-371910 kubelet[1429]: I1030 23:19:51.101376    1429 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-228r4" (UniqueName: "kubernetes.io/secret/0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb-ingress-nginx-token-228r4") pod "0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb" (UID: "0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb")
	Oct 30 23:19:51 ingress-addon-legacy-371910 kubelet[1429]: I1030 23:19:51.106121    1429 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb-ingress-nginx-token-228r4" (OuterVolumeSpecName: "ingress-nginx-token-228r4") pod "0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb" (UID: "0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb"). InnerVolumeSpecName "ingress-nginx-token-228r4". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 30 23:19:51 ingress-addon-legacy-371910 kubelet[1429]: I1030 23:19:51.106537    1429 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb" (UID: "0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 30 23:19:51 ingress-addon-legacy-371910 kubelet[1429]: I1030 23:19:51.201623    1429 reconciler.go:319] Volume detached for volume "ingress-nginx-token-228r4" (UniqueName: "kubernetes.io/secret/0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb-ingress-nginx-token-228r4") on node "ingress-addon-legacy-371910" DevicePath ""
	Oct 30 23:19:51 ingress-addon-legacy-371910 kubelet[1429]: I1030 23:19:51.201647    1429 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb-webhook-cert") on node "ingress-addon-legacy-371910" DevicePath ""
	Oct 30 23:19:51 ingress-addon-legacy-371910 kubelet[1429]: W1030 23:19:51.656236    1429 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/0c5ab9ed-49bd-4d3b-82cf-a02a72d0c3eb/volumes" does not exist
	
	* 
	* ==> storage-provisioner [2f98eb200a062bdf880b0ab7d033f95117c14acab7de4514a0157e0c1818c1f5] <==
	* I1030 23:16:22.196938       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1030 23:16:52.202380       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [fd6003accb0dd337cf1a06c2d26c1aec194d706ee8f510b75a4ddcda01b8c635] <==
	* I1030 23:16:52.919127       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1030 23:16:52.929263       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1030 23:16:52.930351       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1030 23:16:52.945799       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1030 23:16:52.945954       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-371910_f7a7a402-710c-49aa-9dbe-bc8edba2d2f2!
	I1030 23:16:52.947191       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a66c0862-b690-4eb7-a06b-361bb7253fb4", APIVersion:"v1", ResourceVersion:"385", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-371910_f7a7a402-710c-49aa-9dbe-bc8edba2d2f2 became leader
	I1030 23:16:53.048800       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-371910_f7a7a402-710c-49aa-9dbe-bc8edba2d2f2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-371910 -n ingress-addon-legacy-371910
E1030 23:19:55.545374  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-371910 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (167.44s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370491 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370491 -- exec busybox-5bc68d56bd-4t8fk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370491 -- exec busybox-5bc68d56bd-4t8fk -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-370491 -- exec busybox-5bc68d56bd-4t8fk -- sh -c "ping -c 1 192.168.39.1": exit status 1 (197.073078ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-4t8fk): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370491 -- exec busybox-5bc68d56bd-7hhs5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370491 -- exec busybox-5bc68d56bd-7hhs5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-370491 -- exec busybox-5bc68d56bd-7hhs5 -- sh -c "ping -c 1 192.168.39.1": exit status 1 (187.986002ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-7hhs5): exit status 1
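The permission error above is a pod-side restriction, not a network fault: BusyBox's ping needs a raw ICMP socket, which the unprivileged busybox containers in this deployment are most likely denied (no CAP_NET_RAW, and the node's net.ipv4.ping_group_range presumably does not cover the container's group). A minimal diagnostic sketch, reusing the profile and pod name from the log above; these commands are illustrative and were not part of the test run:
	# Re-run the exact exec the test performed; with an unprivileged busybox it
	# fails the same way ("permission denied (are you root?)").
	out/minikube-linux-amd64 kubectl -p multinode-370491 -- exec busybox-5bc68d56bd-4t8fk -- sh -c "ping -c 1 192.168.39.1"
	# Check the node-side sysctl that gates unprivileged (SOCK_DGRAM) ICMP sockets;
	# the kernel default range "1 0" allows no groups at all.
	out/minikube-linux-amd64 -p multinode-370491 ssh -- sysctl net.ipv4.ping_group_range
Granting the container CAP_NET_RAW in its securityContext, or widening ping_group_range on the node, would be the usual ways to let an in-pod ping like this succeed.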
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-370491 -n multinode-370491
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-370491 logs -n 25: (1.360779016s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| ssh     | mount-start-2-330887 ssh -- ls                    | mount-start-2-330887 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:24 UTC | 30 Oct 23 23:24 UTC |
	|         | /minikube-host                                    |                      |         |                |                     |                     |
	| ssh     | mount-start-2-330887 ssh --                       | mount-start-2-330887 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:24 UTC | 30 Oct 23 23:24 UTC |
	|         | mount | grep 9p                                   |                      |         |                |                     |                     |
	| stop    | -p mount-start-2-330887                           | mount-start-2-330887 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:24 UTC | 30 Oct 23 23:24 UTC |
	| start   | -p mount-start-2-330887                           | mount-start-2-330887 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:24 UTC | 30 Oct 23 23:24 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-330887 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:24 UTC |                     |
	|         | --profile mount-start-2-330887                    |                      |         |                |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |                |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |                |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |                |                     |                     |
	| ssh     | mount-start-2-330887 ssh -- ls                    | mount-start-2-330887 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:24 UTC | 30 Oct 23 23:24 UTC |
	|         | /minikube-host                                    |                      |         |                |                     |                     |
	| ssh     | mount-start-2-330887 ssh --                       | mount-start-2-330887 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:24 UTC | 30 Oct 23 23:24 UTC |
	|         | mount | grep 9p                                   |                      |         |                |                     |                     |
	| delete  | -p mount-start-2-330887                           | mount-start-2-330887 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:24 UTC | 30 Oct 23 23:24 UTC |
	| delete  | -p mount-start-1-315410                           | mount-start-1-315410 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:24 UTC | 30 Oct 23 23:24 UTC |
	| start   | -p multinode-370491                               | multinode-370491     | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:24 UTC | 30 Oct 23 23:26 UTC |
	|         | --wait=true --memory=2200                         |                      |         |                |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |                |                     |                     |
	|         | --alsologtostderr                                 |                      |         |                |                     |                     |
	|         | --driver=kvm2                                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio                          |                      |         |                |                     |                     |
	| kubectl | -p multinode-370491 -- apply -f                   | multinode-370491     | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:26 UTC | 30 Oct 23 23:26 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |                |                     |                     |
	| kubectl | -p multinode-370491 -- rollout                    | multinode-370491     | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:26 UTC | 30 Oct 23 23:26 UTC |
	|         | status deployment/busybox                         |                      |         |                |                     |                     |
	| kubectl | -p multinode-370491 -- get pods -o                | multinode-370491     | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:26 UTC | 30 Oct 23 23:26 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-370491 -- get pods -o                | multinode-370491     | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:26 UTC | 30 Oct 23 23:26 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |                |                     |                     |
	| kubectl | -p multinode-370491 -- exec                       | multinode-370491     | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:26 UTC | 30 Oct 23 23:26 UTC |
	|         | busybox-5bc68d56bd-4t8fk --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |                |                     |                     |
	| kubectl | -p multinode-370491 -- exec                       | multinode-370491     | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:26 UTC | 30 Oct 23 23:26 UTC |
	|         | busybox-5bc68d56bd-7hhs5 --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |                |                     |                     |
	| kubectl | -p multinode-370491 -- exec                       | multinode-370491     | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:26 UTC | 30 Oct 23 23:26 UTC |
	|         | busybox-5bc68d56bd-4t8fk --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |                |                     |                     |
	| kubectl | -p multinode-370491 -- exec                       | multinode-370491     | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:26 UTC | 30 Oct 23 23:26 UTC |
	|         | busybox-5bc68d56bd-7hhs5 --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |                |                     |                     |
	| kubectl | -p multinode-370491 -- exec                       | multinode-370491     | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:26 UTC | 30 Oct 23 23:26 UTC |
	|         | busybox-5bc68d56bd-4t8fk -- nslookup              |                      |         |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |                |                     |                     |
	| kubectl | -p multinode-370491 -- exec                       | multinode-370491     | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:26 UTC | 30 Oct 23 23:26 UTC |
	|         | busybox-5bc68d56bd-7hhs5 -- nslookup              |                      |         |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |                |                     |                     |
	| kubectl | -p multinode-370491 -- get pods -o                | multinode-370491     | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:26 UTC | 30 Oct 23 23:26 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |                |                     |                     |
	| kubectl | -p multinode-370491 -- exec                       | multinode-370491     | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:26 UTC | 30 Oct 23 23:26 UTC |
	|         | busybox-5bc68d56bd-4t8fk                          |                      |         |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |                |                     |                     |
	| kubectl | -p multinode-370491 -- exec                       | multinode-370491     | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:26 UTC |                     |
	|         | busybox-5bc68d56bd-4t8fk -- sh                    |                      |         |                |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |                |                     |                     |
	| kubectl | -p multinode-370491 -- exec                       | multinode-370491     | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:26 UTC | 30 Oct 23 23:26 UTC |
	|         | busybox-5bc68d56bd-7hhs5                          |                      |         |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |                |                     |                     |
	| kubectl | -p multinode-370491 -- exec                       | multinode-370491     | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:26 UTC |                     |
	|         | busybox-5bc68d56bd-7hhs5 -- sh                    |                      |         |                |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |                |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/30 23:24:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 23:24:51.483539  229016 out.go:296] Setting OutFile to fd 1 ...
	I1030 23:24:51.483690  229016 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:24:51.483704  229016 out.go:309] Setting ErrFile to fd 2...
	I1030 23:24:51.483712  229016 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:24:51.483889  229016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1030 23:24:51.484463  229016 out.go:303] Setting JSON to false
	I1030 23:24:51.485436  229016 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25643,"bootTime":1698682648,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 23:24:51.485499  229016 start.go:138] virtualization: kvm guest
	I1030 23:24:51.488033  229016 out.go:177] * [multinode-370491] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 23:24:51.489988  229016 out.go:177]   - MINIKUBE_LOCATION=17527
	I1030 23:24:51.491526  229016 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 23:24:51.489990  229016 notify.go:220] Checking for updates...
	I1030 23:24:51.494125  229016 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:24:51.495574  229016 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1030 23:24:51.497133  229016 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 23:24:51.498578  229016 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 23:24:51.500220  229016 driver.go:378] Setting default libvirt URI to qemu:///system
	I1030 23:24:51.534787  229016 out.go:177] * Using the kvm2 driver based on user configuration
	I1030 23:24:51.536441  229016 start.go:298] selected driver: kvm2
	I1030 23:24:51.536472  229016 start.go:902] validating driver "kvm2" against <nil>
	I1030 23:24:51.536489  229016 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 23:24:51.537263  229016 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:24:51.537376  229016 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17527-208817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 23:24:51.552092  229016 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1030 23:24:51.552145  229016 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1030 23:24:51.552434  229016 start_flags.go:934] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 23:24:51.552500  229016 cni.go:84] Creating CNI manager for ""
	I1030 23:24:51.552515  229016 cni.go:136] 0 nodes found, recommending kindnet
	I1030 23:24:51.552530  229016 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1030 23:24:51.552543  229016 start_flags.go:323] config:
	{Name:multinode-370491 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-370491 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1030 23:24:51.552709  229016 iso.go:125] acquiring lock: {Name:mk17c26869b21ec4c3726ac5b4b2fb393d92c043 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:24:51.554660  229016 out.go:177] * Starting control plane node multinode-370491 in cluster multinode-370491
	I1030 23:24:51.556039  229016 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1030 23:24:51.556088  229016 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1030 23:24:51.556099  229016 cache.go:56] Caching tarball of preloaded images
	I1030 23:24:51.556181  229016 preload.go:174] Found /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 23:24:51.556194  229016 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1030 23:24:51.556541  229016 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/config.json ...
	I1030 23:24:51.556571  229016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/config.json: {Name:mka72f7ae7c54c8f5e9715847ce1365375d9467d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:24:51.556746  229016 start.go:365] acquiring machines lock for multinode-370491: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 23:24:51.556784  229016 start.go:369] acquired machines lock for "multinode-370491" in 22.134µs
	I1030 23:24:51.556806  229016 start.go:93] Provisioning new machine with config: &{Name:multinode-370491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-370491 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 23:24:51.556871  229016 start.go:125] createHost starting for "" (driver="kvm2")
	I1030 23:24:51.558659  229016 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 23:24:51.558819  229016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:24:51.558884  229016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:24:51.573756  229016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33885
	I1030 23:24:51.574196  229016 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:24:51.574812  229016 main.go:141] libmachine: Using API Version  1
	I1030 23:24:51.574842  229016 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:24:51.575181  229016 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:24:51.575444  229016 main.go:141] libmachine: (multinode-370491) Calling .GetMachineName
	I1030 23:24:51.575609  229016 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:24:51.575812  229016 start.go:159] libmachine.API.Create for "multinode-370491" (driver="kvm2")
	I1030 23:24:51.575854  229016 client.go:168] LocalClient.Create starting
	I1030 23:24:51.575902  229016 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem
	I1030 23:24:51.575942  229016 main.go:141] libmachine: Decoding PEM data...
	I1030 23:24:51.575972  229016 main.go:141] libmachine: Parsing certificate...
	I1030 23:24:51.576039  229016 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem
	I1030 23:24:51.576066  229016 main.go:141] libmachine: Decoding PEM data...
	I1030 23:24:51.576086  229016 main.go:141] libmachine: Parsing certificate...
	I1030 23:24:51.576115  229016 main.go:141] libmachine: Running pre-create checks...
	I1030 23:24:51.576153  229016 main.go:141] libmachine: (multinode-370491) Calling .PreCreateCheck
	I1030 23:24:51.576599  229016 main.go:141] libmachine: (multinode-370491) Calling .GetConfigRaw
	I1030 23:24:51.577046  229016 main.go:141] libmachine: Creating machine...
	I1030 23:24:51.577063  229016 main.go:141] libmachine: (multinode-370491) Calling .Create
	I1030 23:24:51.577242  229016 main.go:141] libmachine: (multinode-370491) Creating KVM machine...
	I1030 23:24:51.578665  229016 main.go:141] libmachine: (multinode-370491) DBG | found existing default KVM network
	I1030 23:24:51.579473  229016 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:24:51.579321  229040 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000014740}
	I1030 23:24:51.584795  229016 main.go:141] libmachine: (multinode-370491) DBG | trying to create private KVM network mk-multinode-370491 192.168.39.0/24...
	I1030 23:24:51.656842  229016 main.go:141] libmachine: (multinode-370491) DBG | private KVM network mk-multinode-370491 192.168.39.0/24 created
	I1030 23:24:51.656879  229016 main.go:141] libmachine: (multinode-370491) Setting up store path in /home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491 ...
	I1030 23:24:51.656908  229016 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:24:51.656808  229040 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17527-208817/.minikube
	I1030 23:24:51.656927  229016 main.go:141] libmachine: (multinode-370491) Building disk image from file:///home/jenkins/minikube-integration/17527-208817/.minikube/cache/iso/amd64/minikube-v1.32.0-1698684775-17527-amd64.iso
	I1030 23:24:51.657025  229016 main.go:141] libmachine: (multinode-370491) Downloading /home/jenkins/minikube-integration/17527-208817/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17527-208817/.minikube/cache/iso/amd64/minikube-v1.32.0-1698684775-17527-amd64.iso...
	I1030 23:24:51.909998  229016 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:24:51.909844  229040 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa...
	I1030 23:24:52.048408  229016 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:24:52.048274  229040 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/multinode-370491.rawdisk...
	I1030 23:24:52.048444  229016 main.go:141] libmachine: (multinode-370491) DBG | Writing magic tar header
	I1030 23:24:52.048460  229016 main.go:141] libmachine: (multinode-370491) DBG | Writing SSH key tar header
	I1030 23:24:52.048468  229016 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:24:52.048400  229040 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491 ...
	I1030 23:24:52.048537  229016 main.go:141] libmachine: (multinode-370491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491
	I1030 23:24:52.048565  229016 main.go:141] libmachine: (multinode-370491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube/machines
	I1030 23:24:52.048592  229016 main.go:141] libmachine: (multinode-370491) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491 (perms=drwx------)
	I1030 23:24:52.048607  229016 main.go:141] libmachine: (multinode-370491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube
	I1030 23:24:52.048629  229016 main.go:141] libmachine: (multinode-370491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817
	I1030 23:24:52.048644  229016 main.go:141] libmachine: (multinode-370491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 23:24:52.048659  229016 main.go:141] libmachine: (multinode-370491) DBG | Checking permissions on dir: /home/jenkins
	I1030 23:24:52.048672  229016 main.go:141] libmachine: (multinode-370491) DBG | Checking permissions on dir: /home
	I1030 23:24:52.048685  229016 main.go:141] libmachine: (multinode-370491) DBG | Skipping /home - not owner
	I1030 23:24:52.048698  229016 main.go:141] libmachine: (multinode-370491) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube/machines (perms=drwxr-xr-x)
	I1030 23:24:52.048718  229016 main.go:141] libmachine: (multinode-370491) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube (perms=drwxr-xr-x)
	I1030 23:24:52.048734  229016 main.go:141] libmachine: (multinode-370491) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817 (perms=drwxrwxr-x)
	I1030 23:24:52.048750  229016 main.go:141] libmachine: (multinode-370491) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 23:24:52.048764  229016 main.go:141] libmachine: (multinode-370491) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 23:24:52.048779  229016 main.go:141] libmachine: (multinode-370491) Creating domain...
	I1030 23:24:52.049888  229016 main.go:141] libmachine: (multinode-370491) define libvirt domain using xml: 
	I1030 23:24:52.049911  229016 main.go:141] libmachine: (multinode-370491) <domain type='kvm'>
	I1030 23:24:52.049919  229016 main.go:141] libmachine: (multinode-370491)   <name>multinode-370491</name>
	I1030 23:24:52.049925  229016 main.go:141] libmachine: (multinode-370491)   <memory unit='MiB'>2200</memory>
	I1030 23:24:52.049931  229016 main.go:141] libmachine: (multinode-370491)   <vcpu>2</vcpu>
	I1030 23:24:52.049936  229016 main.go:141] libmachine: (multinode-370491)   <features>
	I1030 23:24:52.049942  229016 main.go:141] libmachine: (multinode-370491)     <acpi/>
	I1030 23:24:52.049947  229016 main.go:141] libmachine: (multinode-370491)     <apic/>
	I1030 23:24:52.049952  229016 main.go:141] libmachine: (multinode-370491)     <pae/>
	I1030 23:24:52.049974  229016 main.go:141] libmachine: (multinode-370491)     
	I1030 23:24:52.050018  229016 main.go:141] libmachine: (multinode-370491)   </features>
	I1030 23:24:52.050025  229016 main.go:141] libmachine: (multinode-370491)   <cpu mode='host-passthrough'>
	I1030 23:24:52.050033  229016 main.go:141] libmachine: (multinode-370491)   
	I1030 23:24:52.050045  229016 main.go:141] libmachine: (multinode-370491)   </cpu>
	I1030 23:24:52.050053  229016 main.go:141] libmachine: (multinode-370491)   <os>
	I1030 23:24:52.050063  229016 main.go:141] libmachine: (multinode-370491)     <type>hvm</type>
	I1030 23:24:52.050070  229016 main.go:141] libmachine: (multinode-370491)     <boot dev='cdrom'/>
	I1030 23:24:52.050076  229016 main.go:141] libmachine: (multinode-370491)     <boot dev='hd'/>
	I1030 23:24:52.050082  229016 main.go:141] libmachine: (multinode-370491)     <bootmenu enable='no'/>
	I1030 23:24:52.050116  229016 main.go:141] libmachine: (multinode-370491)   </os>
	I1030 23:24:52.050146  229016 main.go:141] libmachine: (multinode-370491)   <devices>
	I1030 23:24:52.050159  229016 main.go:141] libmachine: (multinode-370491)     <disk type='file' device='cdrom'>
	I1030 23:24:52.050171  229016 main.go:141] libmachine: (multinode-370491)       <source file='/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/boot2docker.iso'/>
	I1030 23:24:52.050187  229016 main.go:141] libmachine: (multinode-370491)       <target dev='hdc' bus='scsi'/>
	I1030 23:24:52.050196  229016 main.go:141] libmachine: (multinode-370491)       <readonly/>
	I1030 23:24:52.050210  229016 main.go:141] libmachine: (multinode-370491)     </disk>
	I1030 23:24:52.050222  229016 main.go:141] libmachine: (multinode-370491)     <disk type='file' device='disk'>
	I1030 23:24:52.050239  229016 main.go:141] libmachine: (multinode-370491)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 23:24:52.050260  229016 main.go:141] libmachine: (multinode-370491)       <source file='/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/multinode-370491.rawdisk'/>
	I1030 23:24:52.050275  229016 main.go:141] libmachine: (multinode-370491)       <target dev='hda' bus='virtio'/>
	I1030 23:24:52.050287  229016 main.go:141] libmachine: (multinode-370491)     </disk>
	I1030 23:24:52.050297  229016 main.go:141] libmachine: (multinode-370491)     <interface type='network'>
	I1030 23:24:52.050311  229016 main.go:141] libmachine: (multinode-370491)       <source network='mk-multinode-370491'/>
	I1030 23:24:52.050324  229016 main.go:141] libmachine: (multinode-370491)       <model type='virtio'/>
	I1030 23:24:52.050337  229016 main.go:141] libmachine: (multinode-370491)     </interface>
	I1030 23:24:52.050374  229016 main.go:141] libmachine: (multinode-370491)     <interface type='network'>
	I1030 23:24:52.050402  229016 main.go:141] libmachine: (multinode-370491)       <source network='default'/>
	I1030 23:24:52.050428  229016 main.go:141] libmachine: (multinode-370491)       <model type='virtio'/>
	I1030 23:24:52.050502  229016 main.go:141] libmachine: (multinode-370491)     </interface>
	I1030 23:24:52.050540  229016 main.go:141] libmachine: (multinode-370491)     <serial type='pty'>
	I1030 23:24:52.050559  229016 main.go:141] libmachine: (multinode-370491)       <target port='0'/>
	I1030 23:24:52.050572  229016 main.go:141] libmachine: (multinode-370491)     </serial>
	I1030 23:24:52.050585  229016 main.go:141] libmachine: (multinode-370491)     <console type='pty'>
	I1030 23:24:52.050603  229016 main.go:141] libmachine: (multinode-370491)       <target type='serial' port='0'/>
	I1030 23:24:52.050614  229016 main.go:141] libmachine: (multinode-370491)     </console>
	I1030 23:24:52.050628  229016 main.go:141] libmachine: (multinode-370491)     <rng model='virtio'>
	I1030 23:24:52.050641  229016 main.go:141] libmachine: (multinode-370491)       <backend model='random'>/dev/random</backend>
	I1030 23:24:52.050649  229016 main.go:141] libmachine: (multinode-370491)     </rng>
	I1030 23:24:52.050659  229016 main.go:141] libmachine: (multinode-370491)     
	I1030 23:24:52.050680  229016 main.go:141] libmachine: (multinode-370491)     
	I1030 23:24:52.050694  229016 main.go:141] libmachine: (multinode-370491)   </devices>
	I1030 23:24:52.050709  229016 main.go:141] libmachine: (multinode-370491) </domain>
	I1030 23:24:52.050725  229016 main.go:141] libmachine: (multinode-370491) 
	I1030 23:24:52.054681  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:d5:b1:a4 in network default
	I1030 23:24:52.055286  229016 main.go:141] libmachine: (multinode-370491) Ensuring networks are active...
	I1030 23:24:52.055316  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:24:52.055937  229016 main.go:141] libmachine: (multinode-370491) Ensuring network default is active
	I1030 23:24:52.056219  229016 main.go:141] libmachine: (multinode-370491) Ensuring network mk-multinode-370491 is active
	I1030 23:24:52.056750  229016 main.go:141] libmachine: (multinode-370491) Getting domain xml...
	I1030 23:24:52.057580  229016 main.go:141] libmachine: (multinode-370491) Creating domain...
	I1030 23:24:53.270034  229016 main.go:141] libmachine: (multinode-370491) Waiting to get IP...
	I1030 23:24:53.270828  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:24:53.271179  229016 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:24:53.271209  229016 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:24:53.271154  229040 retry.go:31] will retry after 273.181297ms: waiting for machine to come up
	I1030 23:24:53.547031  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:24:53.547458  229016 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:24:53.547486  229016 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:24:53.547430  229040 retry.go:31] will retry after 322.505695ms: waiting for machine to come up
	I1030 23:24:53.871942  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:24:53.872443  229016 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:24:53.872472  229016 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:24:53.872355  229040 retry.go:31] will retry after 399.117871ms: waiting for machine to come up
	I1030 23:24:54.272605  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:24:54.273158  229016 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:24:54.273190  229016 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:24:54.273095  229040 retry.go:31] will retry after 563.30987ms: waiting for machine to come up
	I1030 23:24:54.837931  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:24:54.838457  229016 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:24:54.838488  229016 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:24:54.838422  229040 retry.go:31] will retry after 641.400706ms: waiting for machine to come up
	I1030 23:24:55.481201  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:24:55.481694  229016 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:24:55.481728  229016 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:24:55.481653  229040 retry.go:31] will retry after 924.447334ms: waiting for machine to come up
	I1030 23:24:56.407434  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:24:56.407785  229016 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:24:56.407817  229016 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:24:56.407719  229040 retry.go:31] will retry after 940.128162ms: waiting for machine to come up
	I1030 23:24:57.349353  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:24:57.349732  229016 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:24:57.349765  229016 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:24:57.349668  229040 retry.go:31] will retry after 905.947043ms: waiting for machine to come up
	I1030 23:24:58.256655  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:24:58.257031  229016 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:24:58.257063  229016 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:24:58.256979  229040 retry.go:31] will retry after 1.421913378s: waiting for machine to come up
	I1030 23:24:59.680791  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:24:59.681266  229016 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:24:59.681299  229016 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:24:59.681205  229040 retry.go:31] will retry after 1.561628148s: waiting for machine to come up
	I1030 23:25:01.245158  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:01.245638  229016 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:25:01.245666  229016 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:25:01.245562  229040 retry.go:31] will retry after 2.150733471s: waiting for machine to come up
	I1030 23:25:03.399017  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:03.399458  229016 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:25:03.399491  229016 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:25:03.399399  229040 retry.go:31] will retry after 2.345819365s: waiting for machine to come up
	I1030 23:25:05.746752  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:05.747177  229016 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:25:05.747211  229016 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:25:05.747124  229040 retry.go:31] will retry after 3.818236164s: waiting for machine to come up
	I1030 23:25:09.568609  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:09.569050  229016 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:25:09.569076  229016 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:25:09.569004  229040 retry.go:31] will retry after 4.561009623s: waiting for machine to come up
	I1030 23:25:14.133551  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:14.134039  229016 main.go:141] libmachine: (multinode-370491) Found IP for machine: 192.168.39.231
	I1030 23:25:14.134065  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has current primary IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:14.134072  229016 main.go:141] libmachine: (multinode-370491) Reserving static IP address...
	I1030 23:25:14.134533  229016 main.go:141] libmachine: (multinode-370491) DBG | unable to find host DHCP lease matching {name: "multinode-370491", mac: "52:54:00:40:7c:a3", ip: "192.168.39.231"} in network mk-multinode-370491
	I1030 23:25:14.207311  229016 main.go:141] libmachine: (multinode-370491) Reserved static IP address: 192.168.39.231
	I1030 23:25:14.207351  229016 main.go:141] libmachine: (multinode-370491) DBG | Getting to WaitForSSH function...
	I1030 23:25:14.207370  229016 main.go:141] libmachine: (multinode-370491) Waiting for SSH to be available...
	I1030 23:25:14.210220  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:14.210616  229016 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:minikube Clientid:01:52:54:00:40:7c:a3}
	I1030 23:25:14.210655  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:14.210818  229016 main.go:141] libmachine: (multinode-370491) DBG | Using SSH client type: external
	I1030 23:25:14.210852  229016 main.go:141] libmachine: (multinode-370491) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa (-rw-------)
	I1030 23:25:14.210906  229016 main.go:141] libmachine: (multinode-370491) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 23:25:14.210937  229016 main.go:141] libmachine: (multinode-370491) DBG | About to run SSH command:
	I1030 23:25:14.210953  229016 main.go:141] libmachine: (multinode-370491) DBG | exit 0
	I1030 23:25:14.308532  229016 main.go:141] libmachine: (multinode-370491) DBG | SSH cmd err, output: <nil>: 
	I1030 23:25:14.308791  229016 main.go:141] libmachine: (multinode-370491) KVM machine creation complete!
	I1030 23:25:14.309187  229016 main.go:141] libmachine: (multinode-370491) Calling .GetConfigRaw
	I1030 23:25:14.309767  229016 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:25:14.309977  229016 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:25:14.310104  229016 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 23:25:14.310116  229016 main.go:141] libmachine: (multinode-370491) Calling .GetState
	I1030 23:25:14.311365  229016 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 23:25:14.311395  229016 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 23:25:14.311403  229016 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 23:25:14.311409  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:25:14.313817  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:14.314289  229016 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:25:14.314321  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:14.314472  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:25:14.314671  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:25:14.314815  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:25:14.314953  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:25:14.315140  229016 main.go:141] libmachine: Using SSH client type: native
	I1030 23:25:14.315532  229016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1030 23:25:14.315547  229016 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 23:25:14.436366  229016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 23:25:14.436399  229016 main.go:141] libmachine: Detecting the provisioner...
	I1030 23:25:14.436415  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:25:14.439507  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:14.440017  229016 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:25:14.440053  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:14.440229  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:25:14.440449  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:25:14.440635  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:25:14.440818  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:25:14.440971  229016 main.go:141] libmachine: Using SSH client type: native
	I1030 23:25:14.441323  229016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1030 23:25:14.441341  229016 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 23:25:14.561514  229016 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gea8740b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1030 23:25:14.561580  229016 main.go:141] libmachine: found compatible host: buildroot
	I1030 23:25:14.561587  229016 main.go:141] libmachine: Provisioning with buildroot...
	I1030 23:25:14.561598  229016 main.go:141] libmachine: (multinode-370491) Calling .GetMachineName
	I1030 23:25:14.561873  229016 buildroot.go:166] provisioning hostname "multinode-370491"
	I1030 23:25:14.561921  229016 main.go:141] libmachine: (multinode-370491) Calling .GetMachineName
	I1030 23:25:14.562149  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:25:14.565043  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:14.565372  229016 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:25:14.565395  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:14.565556  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:25:14.565746  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:25:14.565960  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:25:14.566115  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:25:14.566367  229016 main.go:141] libmachine: Using SSH client type: native
	I1030 23:25:14.566683  229016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1030 23:25:14.566697  229016 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-370491 && echo "multinode-370491" | sudo tee /etc/hostname
	I1030 23:25:14.702704  229016 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-370491
	
	I1030 23:25:14.702735  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:25:14.705336  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:14.705697  229016 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:25:14.705731  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:14.705902  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:25:14.706107  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:25:14.706298  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:25:14.706457  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:25:14.706655  229016 main.go:141] libmachine: Using SSH client type: native
	I1030 23:25:14.707124  229016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1030 23:25:14.707150  229016 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-370491' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-370491/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-370491' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 23:25:14.838374  229016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 23:25:14.838405  229016 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1030 23:25:14.838444  229016 buildroot.go:174] setting up certificates
	I1030 23:25:14.838455  229016 provision.go:83] configureAuth start
	I1030 23:25:14.838468  229016 main.go:141] libmachine: (multinode-370491) Calling .GetMachineName
	I1030 23:25:14.838751  229016 main.go:141] libmachine: (multinode-370491) Calling .GetIP
	I1030 23:25:14.841384  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:14.841733  229016 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:25:14.841763  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:14.841928  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:25:14.844257  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:14.844580  229016 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:25:14.844612  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:14.844765  229016 provision.go:138] copyHostCerts
	I1030 23:25:14.844809  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1030 23:25:14.844845  229016 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1030 23:25:14.844862  229016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1030 23:25:14.844919  229016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1030 23:25:14.845033  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1030 23:25:14.845052  229016 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1030 23:25:14.845059  229016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1030 23:25:14.845080  229016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1030 23:25:14.845147  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1030 23:25:14.845166  229016 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1030 23:25:14.845172  229016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1030 23:25:14.845189  229016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1030 23:25:14.845242  229016 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.multinode-370491 san=[192.168.39.231 192.168.39.231 localhost 127.0.0.1 minikube multinode-370491]
	I1030 23:25:15.001632  229016 provision.go:172] copyRemoteCerts
	I1030 23:25:15.001701  229016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 23:25:15.001762  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:25:15.004439  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:15.004770  229016 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:25:15.004791  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:15.005007  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:25:15.005218  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:25:15.005416  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:25:15.005600  229016 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa Username:docker}
	I1030 23:25:15.094460  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 23:25:15.094547  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1030 23:25:15.118048  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 23:25:15.118134  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1030 23:25:15.140574  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 23:25:15.140656  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 23:25:15.163841  229016 provision.go:86] duration metric: configureAuth took 325.367771ms
	I1030 23:25:15.163874  229016 buildroot.go:189] setting minikube options for container-runtime
	I1030 23:25:15.164082  229016 config.go:182] Loaded profile config "multinode-370491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:25:15.164199  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:25:15.167127  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:15.167456  229016 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:25:15.167495  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:15.167728  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:25:15.167949  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:25:15.168135  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:25:15.168333  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:25:15.168519  229016 main.go:141] libmachine: Using SSH client type: native
	I1030 23:25:15.168841  229016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1030 23:25:15.168856  229016 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 23:25:15.478467  229016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 23:25:15.478527  229016 main.go:141] libmachine: Checking connection to Docker...
	I1030 23:25:15.478543  229016 main.go:141] libmachine: (multinode-370491) Calling .GetURL
	I1030 23:25:15.479974  229016 main.go:141] libmachine: (multinode-370491) DBG | Using libvirt version 6000000
	I1030 23:25:15.482046  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:15.482293  229016 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:25:15.482326  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:15.482527  229016 main.go:141] libmachine: Docker is up and running!
	I1030 23:25:15.482548  229016 main.go:141] libmachine: Reticulating splines...
	I1030 23:25:15.482557  229016 client.go:171] LocalClient.Create took 23.906690303s
	I1030 23:25:15.482586  229016 start.go:167] duration metric: libmachine.API.Create for "multinode-370491" took 23.906777009s
	I1030 23:25:15.482609  229016 start.go:300] post-start starting for "multinode-370491" (driver="kvm2")
	I1030 23:25:15.482618  229016 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 23:25:15.482632  229016 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:25:15.482937  229016 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 23:25:15.482975  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:25:15.485424  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:15.485752  229016 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:25:15.485778  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:15.485980  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:25:15.486217  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:25:15.486384  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:25:15.486550  229016 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa Username:docker}
	I1030 23:25:15.578897  229016 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 23:25:15.582742  229016 command_runner.go:130] > NAME=Buildroot
	I1030 23:25:15.582770  229016 command_runner.go:130] > VERSION=2021.02.12-1-gea8740b-dirty
	I1030 23:25:15.582777  229016 command_runner.go:130] > ID=buildroot
	I1030 23:25:15.582786  229016 command_runner.go:130] > VERSION_ID=2021.02.12
	I1030 23:25:15.582793  229016 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1030 23:25:15.582842  229016 info.go:137] Remote host: Buildroot 2021.02.12
	I1030 23:25:15.582859  229016 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1030 23:25:15.582921  229016 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1030 23:25:15.582999  229016 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1030 23:25:15.583011  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> /etc/ssl/certs/2160052.pem
	I1030 23:25:15.583130  229016 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 23:25:15.592017  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1030 23:25:15.613161  229016 start.go:303] post-start completed in 130.535209ms
	I1030 23:25:15.613222  229016 main.go:141] libmachine: (multinode-370491) Calling .GetConfigRaw
	I1030 23:25:15.613873  229016 main.go:141] libmachine: (multinode-370491) Calling .GetIP
	I1030 23:25:15.616619  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:15.616968  229016 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:25:15.617003  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:15.617218  229016 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/config.json ...
	I1030 23:25:15.617432  229016 start.go:128] duration metric: createHost completed in 24.060549495s
	I1030 23:25:15.617462  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:25:15.620021  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:15.620363  229016 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:25:15.620390  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:15.620561  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:25:15.620763  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:25:15.620933  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:25:15.621123  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:25:15.621322  229016 main.go:141] libmachine: Using SSH client type: native
	I1030 23:25:15.621790  229016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1030 23:25:15.621808  229016 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1030 23:25:15.741635  229016 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698708315.710827527
	
	I1030 23:25:15.741662  229016 fix.go:206] guest clock: 1698708315.710827527
	I1030 23:25:15.741669  229016 fix.go:219] Guest: 2023-10-30 23:25:15.710827527 +0000 UTC Remote: 2023-10-30 23:25:15.617446235 +0000 UTC m=+24.183023971 (delta=93.381292ms)
	I1030 23:25:15.741689  229016 fix.go:190] guest clock delta is within tolerance: 93.381292ms
	I1030 23:25:15.741694  229016 start.go:83] releasing machines lock for "multinode-370491", held for 24.184901322s
	I1030 23:25:15.741713  229016 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:25:15.741979  229016 main.go:141] libmachine: (multinode-370491) Calling .GetIP
	I1030 23:25:15.744731  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:15.745182  229016 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:25:15.745215  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:15.745374  229016 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:25:15.745879  229016 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:25:15.746063  229016 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:25:15.746184  229016 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 23:25:15.746228  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:25:15.746295  229016 ssh_runner.go:195] Run: cat /version.json
	I1030 23:25:15.746324  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:25:15.748966  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:15.749274  229016 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:25:15.749316  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:15.749335  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:15.749501  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:25:15.749691  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:25:15.749795  229016 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:25:15.749828  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:15.749859  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:25:15.750040  229016 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa Username:docker}
	I1030 23:25:15.750105  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:25:15.750274  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:25:15.750445  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:25:15.750593  229016 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa Username:docker}
	I1030 23:25:15.858543  229016 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1030 23:25:15.858612  229016 command_runner.go:130] > {"iso_version": "v1.32.0-1698684775-17527", "kicbase_version": "v0.0.41-1698660445-17527", "minikube_version": "v1.32.0-beta.0", "commit": "4c1f451320d1a77675b9eefd8e846c23ac017af4"}
	I1030 23:25:15.858783  229016 ssh_runner.go:195] Run: systemctl --version
	I1030 23:25:15.863742  229016 command_runner.go:130] > systemd 247 (247)
	I1030 23:25:15.863779  229016 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1030 23:25:15.864043  229016 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 23:25:16.019875  229016 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1030 23:25:16.025499  229016 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1030 23:25:16.025661  229016 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 23:25:16.025739  229016 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 23:25:16.041370  229016 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1030 23:25:16.041695  229016 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 23:25:16.041712  229016 start.go:472] detecting cgroup driver to use...
	I1030 23:25:16.041763  229016 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 23:25:16.059055  229016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 23:25:16.071459  229016 docker.go:198] disabling cri-docker service (if available) ...
	I1030 23:25:16.071503  229016 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 23:25:16.084245  229016 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 23:25:16.097010  229016 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 23:25:16.111105  229016 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1030 23:25:16.209177  229016 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 23:25:16.318737  229016 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1030 23:25:16.318778  229016 docker.go:214] disabling docker service ...
	I1030 23:25:16.318844  229016 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 23:25:16.331450  229016 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 23:25:16.341856  229016 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1030 23:25:16.342721  229016 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 23:25:16.355331  229016 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1030 23:25:16.446171  229016 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 23:25:16.547838  229016 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1030 23:25:16.547865  229016 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1030 23:25:16.547938  229016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 23:25:16.560485  229016 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 23:25:16.577103  229016 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1030 23:25:16.577491  229016 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1030 23:25:16.577549  229016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:25:16.586906  229016 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 23:25:16.586992  229016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:25:16.595885  229016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:25:16.604665  229016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:25:16.613705  229016 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
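Net effect of the sed edits above: the /etc/crio/crio.conf.d/02-crio.conf drop-in carries the minikube overrides before crio is restarted. A minimal sketch of the resulting keys, using only values that appear in this log (the rest of the drop-in is not shown here):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

The same cgroup_manager and conmon_cgroup values show up again in the crio config dump further down, which is a useful cross-check when a crio restart fails.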
	I1030 23:25:16.623198  229016 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 23:25:16.630941  229016 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 23:25:16.630991  229016 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 23:25:16.631036  229016 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 23:25:16.642461  229016 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 23:25:16.651941  229016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 23:25:16.748803  229016 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 23:25:16.913003  229016 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 23:25:16.913085  229016 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 23:25:16.917731  229016 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1030 23:25:16.917757  229016 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1030 23:25:16.917768  229016 command_runner.go:130] > Device: 16h/22d	Inode: 724         Links: 1
	I1030 23:25:16.917779  229016 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1030 23:25:16.917784  229016 command_runner.go:130] > Access: 2023-10-30 23:25:16.869171986 +0000
	I1030 23:25:16.917791  229016 command_runner.go:130] > Modify: 2023-10-30 23:25:16.869171986 +0000
	I1030 23:25:16.917796  229016 command_runner.go:130] > Change: 2023-10-30 23:25:16.869171986 +0000
	I1030 23:25:16.917804  229016 command_runner.go:130] >  Birth: -
	I1030 23:25:16.917825  229016 start.go:540] Will wait 60s for crictl version
	I1030 23:25:16.917869  229016 ssh_runner.go:195] Run: which crictl
	I1030 23:25:16.921460  229016 command_runner.go:130] > /usr/bin/crictl
	I1030 23:25:16.921535  229016 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 23:25:16.957782  229016 command_runner.go:130] > Version:  0.1.0
	I1030 23:25:16.957810  229016 command_runner.go:130] > RuntimeName:  cri-o
	I1030 23:25:16.957818  229016 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1030 23:25:16.957848  229016 command_runner.go:130] > RuntimeApiVersion:  v1
	I1030 23:25:16.959234  229016 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1030 23:25:16.959320  229016 ssh_runner.go:195] Run: crio --version
	I1030 23:25:17.001949  229016 command_runner.go:130] > crio version 1.24.1
	I1030 23:25:17.001974  229016 command_runner.go:130] > Version:          1.24.1
	I1030 23:25:17.002000  229016 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1030 23:25:17.002005  229016 command_runner.go:130] > GitTreeState:     dirty
	I1030 23:25:17.002012  229016 command_runner.go:130] > BuildDate:        2023-10-30T22:24:56Z
	I1030 23:25:17.002018  229016 command_runner.go:130] > GoVersion:        go1.19.9
	I1030 23:25:17.002025  229016 command_runner.go:130] > Compiler:         gc
	I1030 23:25:17.002035  229016 command_runner.go:130] > Platform:         linux/amd64
	I1030 23:25:17.002048  229016 command_runner.go:130] > Linkmode:         dynamic
	I1030 23:25:17.002063  229016 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1030 23:25:17.002067  229016 command_runner.go:130] > SeccompEnabled:   true
	I1030 23:25:17.002071  229016 command_runner.go:130] > AppArmorEnabled:  false
	I1030 23:25:17.002152  229016 ssh_runner.go:195] Run: crio --version
	I1030 23:25:17.049580  229016 command_runner.go:130] > crio version 1.24.1
	I1030 23:25:17.049610  229016 command_runner.go:130] > Version:          1.24.1
	I1030 23:25:17.049631  229016 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1030 23:25:17.049638  229016 command_runner.go:130] > GitTreeState:     dirty
	I1030 23:25:17.049665  229016 command_runner.go:130] > BuildDate:        2023-10-30T22:24:56Z
	I1030 23:25:17.049673  229016 command_runner.go:130] > GoVersion:        go1.19.9
	I1030 23:25:17.049678  229016 command_runner.go:130] > Compiler:         gc
	I1030 23:25:17.049683  229016 command_runner.go:130] > Platform:         linux/amd64
	I1030 23:25:17.049688  229016 command_runner.go:130] > Linkmode:         dynamic
	I1030 23:25:17.049696  229016 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1030 23:25:17.049703  229016 command_runner.go:130] > SeccompEnabled:   true
	I1030 23:25:17.049708  229016 command_runner.go:130] > AppArmorEnabled:  false
	I1030 23:25:17.053035  229016 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1030 23:25:17.054423  229016 main.go:141] libmachine: (multinode-370491) Calling .GetIP
	I1030 23:25:17.057183  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:17.057518  229016 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:25:17.057560  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:17.057728  229016 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 23:25:17.061897  229016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 23:25:17.074441  229016 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1030 23:25:17.074507  229016 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 23:25:17.106585  229016 command_runner.go:130] > {
	I1030 23:25:17.106614  229016 command_runner.go:130] >   "images": [
	I1030 23:25:17.106621  229016 command_runner.go:130] >   ]
	I1030 23:25:17.106627  229016 command_runner.go:130] > }
	I1030 23:25:17.107761  229016 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1030 23:25:17.107823  229016 ssh_runner.go:195] Run: which lz4
	I1030 23:25:17.111799  229016 command_runner.go:130] > /usr/bin/lz4
	I1030 23:25:17.111833  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1030 23:25:17.111917  229016 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1030 23:25:17.115970  229016 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 23:25:17.116004  229016 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 23:25:17.116025  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1030 23:25:18.836897  229016 crio.go:444] Took 1.725000 seconds to copy over tarball
	I1030 23:25:18.837010  229016 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 23:25:21.830122  229016 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.993081542s)
	I1030 23:25:21.830151  229016 crio.go:451] Took 2.993219 seconds to extract the tarball
	I1030 23:25:21.830160  229016 ssh_runner.go:146] rm: /preloaded.tar.lz4
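The preload path above (existence check, scp of the ~457 MB tarball, lz4 extraction into /var, cleanup) can be replayed by hand on the guest when a run needs to be reproduced; the commands below mirror what the log records and assume the tarball has already been copied to /preloaded.tar.lz4:

	stat /preloaded.tar.lz4                          # exits non-zero on a fresh VM
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	rm /preloaded.tar.lz4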
	I1030 23:25:21.872541  229016 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 23:25:21.953770  229016 command_runner.go:130] > {
	I1030 23:25:21.953799  229016 command_runner.go:130] >   "images": [
	I1030 23:25:21.953806  229016 command_runner.go:130] >     {
	I1030 23:25:21.953815  229016 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1030 23:25:21.953819  229016 command_runner.go:130] >       "repoTags": [
	I1030 23:25:21.953825  229016 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1030 23:25:21.953829  229016 command_runner.go:130] >       ],
	I1030 23:25:21.953833  229016 command_runner.go:130] >       "repoDigests": [
	I1030 23:25:21.953841  229016 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1030 23:25:21.953848  229016 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1030 23:25:21.953852  229016 command_runner.go:130] >       ],
	I1030 23:25:21.953856  229016 command_runner.go:130] >       "size": "65258016",
	I1030 23:25:21.953860  229016 command_runner.go:130] >       "uid": null,
	I1030 23:25:21.953864  229016 command_runner.go:130] >       "username": "",
	I1030 23:25:21.953873  229016 command_runner.go:130] >       "spec": null,
	I1030 23:25:21.953877  229016 command_runner.go:130] >       "pinned": false
	I1030 23:25:21.953880  229016 command_runner.go:130] >     },
	I1030 23:25:21.953884  229016 command_runner.go:130] >     {
	I1030 23:25:21.953890  229016 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1030 23:25:21.953894  229016 command_runner.go:130] >       "repoTags": [
	I1030 23:25:21.953903  229016 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1030 23:25:21.953907  229016 command_runner.go:130] >       ],
	I1030 23:25:21.953911  229016 command_runner.go:130] >       "repoDigests": [
	I1030 23:25:21.953918  229016 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1030 23:25:21.953929  229016 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1030 23:25:21.953933  229016 command_runner.go:130] >       ],
	I1030 23:25:21.953966  229016 command_runner.go:130] >       "size": "31470524",
	I1030 23:25:21.953979  229016 command_runner.go:130] >       "uid": null,
	I1030 23:25:21.953983  229016 command_runner.go:130] >       "username": "",
	I1030 23:25:21.953987  229016 command_runner.go:130] >       "spec": null,
	I1030 23:25:21.953991  229016 command_runner.go:130] >       "pinned": false
	I1030 23:25:21.953995  229016 command_runner.go:130] >     },
	I1030 23:25:21.953999  229016 command_runner.go:130] >     {
	I1030 23:25:21.954005  229016 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1030 23:25:21.954010  229016 command_runner.go:130] >       "repoTags": [
	I1030 23:25:21.954015  229016 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1030 23:25:21.954021  229016 command_runner.go:130] >       ],
	I1030 23:25:21.954026  229016 command_runner.go:130] >       "repoDigests": [
	I1030 23:25:21.954036  229016 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1030 23:25:21.954046  229016 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1030 23:25:21.954052  229016 command_runner.go:130] >       ],
	I1030 23:25:21.954057  229016 command_runner.go:130] >       "size": "53621675",
	I1030 23:25:21.954064  229016 command_runner.go:130] >       "uid": null,
	I1030 23:25:21.954068  229016 command_runner.go:130] >       "username": "",
	I1030 23:25:21.954075  229016 command_runner.go:130] >       "spec": null,
	I1030 23:25:21.954079  229016 command_runner.go:130] >       "pinned": false
	I1030 23:25:21.954083  229016 command_runner.go:130] >     },
	I1030 23:25:21.954087  229016 command_runner.go:130] >     {
	I1030 23:25:21.954095  229016 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1030 23:25:21.954100  229016 command_runner.go:130] >       "repoTags": [
	I1030 23:25:21.954105  229016 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1030 23:25:21.954115  229016 command_runner.go:130] >       ],
	I1030 23:25:21.954122  229016 command_runner.go:130] >       "repoDigests": [
	I1030 23:25:21.954129  229016 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1030 23:25:21.954138  229016 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1030 23:25:21.954149  229016 command_runner.go:130] >       ],
	I1030 23:25:21.954158  229016 command_runner.go:130] >       "size": "295456551",
	I1030 23:25:21.954163  229016 command_runner.go:130] >       "uid": {
	I1030 23:25:21.954169  229016 command_runner.go:130] >         "value": "0"
	I1030 23:25:21.954174  229016 command_runner.go:130] >       },
	I1030 23:25:21.954178  229016 command_runner.go:130] >       "username": "",
	I1030 23:25:21.954182  229016 command_runner.go:130] >       "spec": null,
	I1030 23:25:21.954189  229016 command_runner.go:130] >       "pinned": false
	I1030 23:25:21.954193  229016 command_runner.go:130] >     },
	I1030 23:25:21.954196  229016 command_runner.go:130] >     {
	I1030 23:25:21.954202  229016 command_runner.go:130] >       "id": "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076",
	I1030 23:25:21.954206  229016 command_runner.go:130] >       "repoTags": [
	I1030 23:25:21.954212  229016 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1030 23:25:21.954218  229016 command_runner.go:130] >       ],
	I1030 23:25:21.954222  229016 command_runner.go:130] >       "repoDigests": [
	I1030 23:25:21.954229  229016 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab",
	I1030 23:25:21.954239  229016 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1030 23:25:21.954244  229016 command_runner.go:130] >       ],
	I1030 23:25:21.954249  229016 command_runner.go:130] >       "size": "127165392",
	I1030 23:25:21.954257  229016 command_runner.go:130] >       "uid": {
	I1030 23:25:21.954262  229016 command_runner.go:130] >         "value": "0"
	I1030 23:25:21.954267  229016 command_runner.go:130] >       },
	I1030 23:25:21.954271  229016 command_runner.go:130] >       "username": "",
	I1030 23:25:21.954278  229016 command_runner.go:130] >       "spec": null,
	I1030 23:25:21.954283  229016 command_runner.go:130] >       "pinned": false
	I1030 23:25:21.954288  229016 command_runner.go:130] >     },
	I1030 23:25:21.954292  229016 command_runner.go:130] >     {
	I1030 23:25:21.954298  229016 command_runner.go:130] >       "id": "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3",
	I1030 23:25:21.954308  229016 command_runner.go:130] >       "repoTags": [
	I1030 23:25:21.954316  229016 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1030 23:25:21.954320  229016 command_runner.go:130] >       ],
	I1030 23:25:21.954326  229016 command_runner.go:130] >       "repoDigests": [
	I1030 23:25:21.954334  229016 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1030 23:25:21.954344  229016 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"
	I1030 23:25:21.954350  229016 command_runner.go:130] >       ],
	I1030 23:25:21.954354  229016 command_runner.go:130] >       "size": "123188534",
	I1030 23:25:21.954358  229016 command_runner.go:130] >       "uid": {
	I1030 23:25:21.954365  229016 command_runner.go:130] >         "value": "0"
	I1030 23:25:21.954371  229016 command_runner.go:130] >       },
	I1030 23:25:21.954376  229016 command_runner.go:130] >       "username": "",
	I1030 23:25:21.954382  229016 command_runner.go:130] >       "spec": null,
	I1030 23:25:21.954386  229016 command_runner.go:130] >       "pinned": false
	I1030 23:25:21.954390  229016 command_runner.go:130] >     },
	I1030 23:25:21.954396  229016 command_runner.go:130] >     {
	I1030 23:25:21.954402  229016 command_runner.go:130] >       "id": "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf",
	I1030 23:25:21.954408  229016 command_runner.go:130] >       "repoTags": [
	I1030 23:25:21.954413  229016 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1030 23:25:21.954418  229016 command_runner.go:130] >       ],
	I1030 23:25:21.954422  229016 command_runner.go:130] >       "repoDigests": [
	I1030 23:25:21.954430  229016 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8",
	I1030 23:25:21.954438  229016 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1030 23:25:21.954443  229016 command_runner.go:130] >       ],
	I1030 23:25:21.954452  229016 command_runner.go:130] >       "size": "74691991",
	I1030 23:25:21.954459  229016 command_runner.go:130] >       "uid": null,
	I1030 23:25:21.954463  229016 command_runner.go:130] >       "username": "",
	I1030 23:25:21.954472  229016 command_runner.go:130] >       "spec": null,
	I1030 23:25:21.954479  229016 command_runner.go:130] >       "pinned": false
	I1030 23:25:21.954482  229016 command_runner.go:130] >     },
	I1030 23:25:21.954486  229016 command_runner.go:130] >     {
	I1030 23:25:21.954492  229016 command_runner.go:130] >       "id": "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4",
	I1030 23:25:21.954498  229016 command_runner.go:130] >       "repoTags": [
	I1030 23:25:21.954503  229016 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1030 23:25:21.954509  229016 command_runner.go:130] >       ],
	I1030 23:25:21.954513  229016 command_runner.go:130] >       "repoDigests": [
	I1030 23:25:21.954536  229016 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1030 23:25:21.954547  229016 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"
	I1030 23:25:21.954550  229016 command_runner.go:130] >       ],
	I1030 23:25:21.954554  229016 command_runner.go:130] >       "size": "61498678",
	I1030 23:25:21.954558  229016 command_runner.go:130] >       "uid": {
	I1030 23:25:21.954563  229016 command_runner.go:130] >         "value": "0"
	I1030 23:25:21.954567  229016 command_runner.go:130] >       },
	I1030 23:25:21.954573  229016 command_runner.go:130] >       "username": "",
	I1030 23:25:21.954577  229016 command_runner.go:130] >       "spec": null,
	I1030 23:25:21.954584  229016 command_runner.go:130] >       "pinned": false
	I1030 23:25:21.954590  229016 command_runner.go:130] >     },
	I1030 23:25:21.954594  229016 command_runner.go:130] >     {
	I1030 23:25:21.954600  229016 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1030 23:25:21.954605  229016 command_runner.go:130] >       "repoTags": [
	I1030 23:25:21.954609  229016 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1030 23:25:21.954616  229016 command_runner.go:130] >       ],
	I1030 23:25:21.954620  229016 command_runner.go:130] >       "repoDigests": [
	I1030 23:25:21.954628  229016 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1030 23:25:21.954635  229016 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1030 23:25:21.954640  229016 command_runner.go:130] >       ],
	I1030 23:25:21.954645  229016 command_runner.go:130] >       "size": "750414",
	I1030 23:25:21.954651  229016 command_runner.go:130] >       "uid": {
	I1030 23:25:21.954655  229016 command_runner.go:130] >         "value": "65535"
	I1030 23:25:21.954661  229016 command_runner.go:130] >       },
	I1030 23:25:21.954666  229016 command_runner.go:130] >       "username": "",
	I1030 23:25:21.954672  229016 command_runner.go:130] >       "spec": null,
	I1030 23:25:21.954676  229016 command_runner.go:130] >       "pinned": false
	I1030 23:25:21.954682  229016 command_runner.go:130] >     }
	I1030 23:25:21.954688  229016 command_runner.go:130] >   ]
	I1030 23:25:21.954691  229016 command_runner.go:130] > }
	I1030 23:25:21.954813  229016 crio.go:496] all images are preloaded for cri-o runtime.
	I1030 23:25:21.954826  229016 cache_images.go:84] Images are preloaded, skipping loading
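A quick manual check for the same condition is to list images through crictl and look for the pinned Kubernetes version; an empty "images" array (as seen at 23:25:17 above, before the tarball was extracted) means the preload is not in place yet:

	sudo crictl images --output json     # empty "images" list => not preloaded
	sudo crictl images | grep v1.28.3    # apiserver, controller-manager, proxy, scheduler expected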
	I1030 23:25:21.954907  229016 ssh_runner.go:195] Run: crio config
	I1030 23:25:22.015248  229016 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1030 23:25:22.015281  229016 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1030 23:25:22.015293  229016 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1030 23:25:22.015299  229016 command_runner.go:130] > #
	I1030 23:25:22.015309  229016 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1030 23:25:22.015319  229016 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1030 23:25:22.015329  229016 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1030 23:25:22.015355  229016 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1030 23:25:22.015364  229016 command_runner.go:130] > # reload'.
	I1030 23:25:22.015374  229016 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1030 23:25:22.015391  229016 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1030 23:25:22.015416  229016 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1030 23:25:22.015431  229016 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1030 23:25:22.015440  229016 command_runner.go:130] > [crio]
	I1030 23:25:22.015453  229016 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1030 23:25:22.015465  229016 command_runner.go:130] > # containers images, in this directory.
	I1030 23:25:22.015477  229016 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1030 23:25:22.015496  229016 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1030 23:25:22.015509  229016 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1030 23:25:22.015523  229016 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1030 23:25:22.015533  229016 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1030 23:25:22.015545  229016 command_runner.go:130] > storage_driver = "overlay"
	I1030 23:25:22.015563  229016 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1030 23:25:22.015585  229016 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1030 23:25:22.015596  229016 command_runner.go:130] > storage_option = [
	I1030 23:25:22.015608  229016 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1030 23:25:22.015613  229016 command_runner.go:130] > ]
	I1030 23:25:22.015624  229016 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1030 23:25:22.015637  229016 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1030 23:25:22.015652  229016 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1030 23:25:22.015665  229016 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1030 23:25:22.015678  229016 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1030 23:25:22.015688  229016 command_runner.go:130] > # always happen on a node reboot
	I1030 23:25:22.015696  229016 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1030 23:25:22.015708  229016 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1030 23:25:22.015722  229016 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1030 23:25:22.015746  229016 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1030 23:25:22.015764  229016 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1030 23:25:22.015778  229016 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1030 23:25:22.015797  229016 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1030 23:25:22.015840  229016 command_runner.go:130] > # internal_wipe = true
	I1030 23:25:22.015850  229016 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1030 23:25:22.015861  229016 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1030 23:25:22.015870  229016 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1030 23:25:22.015879  229016 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1030 23:25:22.015889  229016 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1030 23:25:22.015897  229016 command_runner.go:130] > [crio.api]
	I1030 23:25:22.015915  229016 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1030 23:25:22.015928  229016 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1030 23:25:22.015942  229016 command_runner.go:130] > # IP address on which the stream server will listen.
	I1030 23:25:22.015953  229016 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1030 23:25:22.015975  229016 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1030 23:25:22.015988  229016 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1030 23:25:22.015998  229016 command_runner.go:130] > # stream_port = "0"
	I1030 23:25:22.016009  229016 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1030 23:25:22.016020  229016 command_runner.go:130] > # stream_enable_tls = false
	I1030 23:25:22.016035  229016 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1030 23:25:22.016046  229016 command_runner.go:130] > # stream_idle_timeout = ""
	I1030 23:25:22.016057  229016 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1030 23:25:22.016071  229016 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1030 23:25:22.016078  229016 command_runner.go:130] > # minutes.
	I1030 23:25:22.016088  229016 command_runner.go:130] > # stream_tls_cert = ""
	I1030 23:25:22.016098  229016 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1030 23:25:22.016110  229016 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1030 23:25:22.016120  229016 command_runner.go:130] > # stream_tls_key = ""
	I1030 23:25:22.016134  229016 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1030 23:25:22.016166  229016 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1030 23:25:22.016176  229016 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1030 23:25:22.016184  229016 command_runner.go:130] > # stream_tls_ca = ""
	I1030 23:25:22.016197  229016 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1030 23:25:22.016206  229016 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1030 23:25:22.016220  229016 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1030 23:25:22.016232  229016 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1030 23:25:22.016264  229016 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1030 23:25:22.016276  229016 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1030 23:25:22.016285  229016 command_runner.go:130] > [crio.runtime]
	I1030 23:25:22.016299  229016 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1030 23:25:22.016309  229016 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1030 23:25:22.016319  229016 command_runner.go:130] > # "nofile=1024:2048"
	I1030 23:25:22.016329  229016 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1030 23:25:22.016338  229016 command_runner.go:130] > # default_ulimits = [
	I1030 23:25:22.016344  229016 command_runner.go:130] > # ]
	I1030 23:25:22.016357  229016 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1030 23:25:22.016373  229016 command_runner.go:130] > # no_pivot = false
	I1030 23:25:22.016387  229016 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1030 23:25:22.016400  229016 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1030 23:25:22.016409  229016 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1030 23:25:22.016419  229016 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1030 23:25:22.016430  229016 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1030 23:25:22.016444  229016 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1030 23:25:22.016456  229016 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1030 23:25:22.016467  229016 command_runner.go:130] > # Cgroup setting for conmon
	I1030 23:25:22.016481  229016 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1030 23:25:22.016492  229016 command_runner.go:130] > conmon_cgroup = "pod"
	I1030 23:25:22.016504  229016 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1030 23:25:22.016517  229016 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1030 23:25:22.016532  229016 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1030 23:25:22.016544  229016 command_runner.go:130] > conmon_env = [
	I1030 23:25:22.016558  229016 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1030 23:25:22.016567  229016 command_runner.go:130] > ]
	I1030 23:25:22.016586  229016 command_runner.go:130] > # Additional environment variables to set for all the
	I1030 23:25:22.016598  229016 command_runner.go:130] > # containers. These are overridden if set in the
	I1030 23:25:22.016611  229016 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1030 23:25:22.016622  229016 command_runner.go:130] > # default_env = [
	I1030 23:25:22.016632  229016 command_runner.go:130] > # ]
	I1030 23:25:22.016643  229016 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1030 23:25:22.016653  229016 command_runner.go:130] > # selinux = false
	I1030 23:25:22.016667  229016 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1030 23:25:22.016682  229016 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1030 23:25:22.016695  229016 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1030 23:25:22.016704  229016 command_runner.go:130] > # seccomp_profile = ""
	I1030 23:25:22.016720  229016 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1030 23:25:22.016734  229016 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1030 23:25:22.016749  229016 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1030 23:25:22.016756  229016 command_runner.go:130] > # which might increase security.
	I1030 23:25:22.016764  229016 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1030 23:25:22.016777  229016 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1030 23:25:22.016788  229016 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1030 23:25:22.016839  229016 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1030 23:25:22.016861  229016 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1030 23:25:22.016871  229016 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:25:22.016879  229016 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1030 23:25:22.016893  229016 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1030 23:25:22.016904  229016 command_runner.go:130] > # the cgroup blockio controller.
	I1030 23:25:22.016913  229016 command_runner.go:130] > # blockio_config_file = ""
	I1030 23:25:22.016929  229016 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1030 23:25:22.016950  229016 command_runner.go:130] > # irqbalance daemon.
	I1030 23:25:22.016963  229016 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1030 23:25:22.016974  229016 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1030 23:25:22.016987  229016 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:25:22.016998  229016 command_runner.go:130] > # rdt_config_file = ""
	I1030 23:25:22.017012  229016 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1030 23:25:22.017021  229016 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1030 23:25:22.017036  229016 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1030 23:25:22.017047  229016 command_runner.go:130] > # separate_pull_cgroup = ""
	I1030 23:25:22.017060  229016 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1030 23:25:22.017073  229016 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1030 23:25:22.017095  229016 command_runner.go:130] > # will be added.
	I1030 23:25:22.017103  229016 command_runner.go:130] > # default_capabilities = [
	I1030 23:25:22.017110  229016 command_runner.go:130] > # 	"CHOWN",
	I1030 23:25:22.017117  229016 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1030 23:25:22.017124  229016 command_runner.go:130] > # 	"FSETID",
	I1030 23:25:22.017131  229016 command_runner.go:130] > # 	"FOWNER",
	I1030 23:25:22.017138  229016 command_runner.go:130] > # 	"SETGID",
	I1030 23:25:22.017146  229016 command_runner.go:130] > # 	"SETUID",
	I1030 23:25:22.017153  229016 command_runner.go:130] > # 	"SETPCAP",
	I1030 23:25:22.017161  229016 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1030 23:25:22.017167  229016 command_runner.go:130] > # 	"KILL",
	I1030 23:25:22.017173  229016 command_runner.go:130] > # ]
	I1030 23:25:22.017183  229016 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1030 23:25:22.017194  229016 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1030 23:25:22.017203  229016 command_runner.go:130] > # default_sysctls = [
	I1030 23:25:22.017209  229016 command_runner.go:130] > # ]
	I1030 23:25:22.017221  229016 command_runner.go:130] > # List of devices on the host that a
	I1030 23:25:22.017230  229016 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1030 23:25:22.017241  229016 command_runner.go:130] > # allowed_devices = [
	I1030 23:25:22.017248  229016 command_runner.go:130] > # 	"/dev/fuse",
	I1030 23:25:22.017253  229016 command_runner.go:130] > # ]
	I1030 23:25:22.017260  229016 command_runner.go:130] > # List of additional devices. specified as
	I1030 23:25:22.017276  229016 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1030 23:25:22.017286  229016 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1030 23:25:22.017369  229016 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1030 23:25:22.017388  229016 command_runner.go:130] > # additional_devices = [
	I1030 23:25:22.017395  229016 command_runner.go:130] > # ]
	I1030 23:25:22.017404  229016 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1030 23:25:22.017413  229016 command_runner.go:130] > # cdi_spec_dirs = [
	I1030 23:25:22.017420  229016 command_runner.go:130] > # 	"/etc/cdi",
	I1030 23:25:22.017430  229016 command_runner.go:130] > # 	"/var/run/cdi",
	I1030 23:25:22.017435  229016 command_runner.go:130] > # ]
	I1030 23:25:22.017448  229016 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1030 23:25:22.017461  229016 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1030 23:25:22.017468  229016 command_runner.go:130] > # Defaults to false.
	I1030 23:25:22.017478  229016 command_runner.go:130] > # device_ownership_from_security_context = false
	I1030 23:25:22.017496  229016 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1030 23:25:22.017508  229016 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1030 23:25:22.017518  229016 command_runner.go:130] > # hooks_dir = [
	I1030 23:25:22.017525  229016 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1030 23:25:22.017534  229016 command_runner.go:130] > # ]
	I1030 23:25:22.017544  229016 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1030 23:25:22.017557  229016 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1030 23:25:22.017566  229016 command_runner.go:130] > # its default mounts from the following two files:
	I1030 23:25:22.017580  229016 command_runner.go:130] > #
	I1030 23:25:22.017596  229016 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1030 23:25:22.017610  229016 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1030 23:25:22.017619  229016 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1030 23:25:22.017628  229016 command_runner.go:130] > #
	I1030 23:25:22.017638  229016 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1030 23:25:22.017652  229016 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1030 23:25:22.017662  229016 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1030 23:25:22.017674  229016 command_runner.go:130] > #      only add mounts it finds in this file.
	I1030 23:25:22.017683  229016 command_runner.go:130] > #
	I1030 23:25:22.017694  229016 command_runner.go:130] > # default_mounts_file = ""
	I1030 23:25:22.017706  229016 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1030 23:25:22.017717  229016 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1030 23:25:22.017739  229016 command_runner.go:130] > pids_limit = 1024
	I1030 23:25:22.017754  229016 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1030 23:25:22.017765  229016 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1030 23:25:22.017779  229016 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1030 23:25:22.017791  229016 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1030 23:25:22.017799  229016 command_runner.go:130] > # log_size_max = -1
	I1030 23:25:22.017806  229016 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1030 23:25:22.017811  229016 command_runner.go:130] > # log_to_journald = false
	I1030 23:25:22.017817  229016 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1030 23:25:22.017824  229016 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1030 23:25:22.017829  229016 command_runner.go:130] > # Path to directory for container attach sockets.
	I1030 23:25:22.017836  229016 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1030 23:25:22.017842  229016 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1030 23:25:22.017848  229016 command_runner.go:130] > # bind_mount_prefix = ""
	I1030 23:25:22.017884  229016 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1030 23:25:22.017894  229016 command_runner.go:130] > # read_only = false
	I1030 23:25:22.017904  229016 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1030 23:25:22.017916  229016 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1030 23:25:22.017923  229016 command_runner.go:130] > # live configuration reload.
	I1030 23:25:22.017934  229016 command_runner.go:130] > # log_level = "info"
	I1030 23:25:22.017942  229016 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1030 23:25:22.017954  229016 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:25:22.017963  229016 command_runner.go:130] > # log_filter = ""
	I1030 23:25:22.017976  229016 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1030 23:25:22.017989  229016 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1030 23:25:22.017998  229016 command_runner.go:130] > # separated by comma.
	I1030 23:25:22.018009  229016 command_runner.go:130] > # uid_mappings = ""
	I1030 23:25:22.018018  229016 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1030 23:25:22.018032  229016 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1030 23:25:22.018041  229016 command_runner.go:130] > # separated by comma.
	I1030 23:25:22.018048  229016 command_runner.go:130] > # gid_mappings = ""
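The uid_mappings and gid_mappings values described above are comma-separated lists of containerID:hostID:size triples. As a hedged illustration of that format (not part of this test run; the type and function names below are hypothetical), a small Go parser could look like this:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// idMapping mirrors one containerID:hostID:size range from uid_mappings/gid_mappings.
type idMapping struct {
	ContainerID, HostID, Size uint32
}

// parseMappings splits a "c:h:s,c:h:s,..." string into idMapping values.
func parseMappings(s string) ([]idMapping, error) {
	if s == "" {
		return nil, nil
	}
	var out []idMapping
	for _, r := range strings.Split(s, ",") {
		parts := strings.Split(strings.TrimSpace(r), ":")
		if len(parts) != 3 {
			return nil, fmt.Errorf("malformed range %q, want containerID:hostID:size", r)
		}
		var vals [3]uint32
		for i, p := range parts {
			n, err := strconv.ParseUint(p, 10, 32)
			if err != nil {
				return nil, err
			}
			vals[i] = uint32(n)
		}
		out = append(out, idMapping{vals[0], vals[1], vals[2]})
	}
	return out, nil
}

func main() {
	m, err := parseMappings("0:100000:65536,65536:1000:1")
	fmt.Println(m, err)
}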
	I1030 23:25:22.018061  229016 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1030 23:25:22.018071  229016 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1030 23:25:22.018090  229016 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1030 23:25:22.018101  229016 command_runner.go:130] > # minimum_mappable_uid = -1
	I1030 23:25:22.018111  229016 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1030 23:25:22.018121  229016 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1030 23:25:22.018127  229016 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1030 23:25:22.018134  229016 command_runner.go:130] > # minimum_mappable_gid = -1
	I1030 23:25:22.018140  229016 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1030 23:25:22.018148  229016 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1030 23:25:22.018156  229016 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1030 23:25:22.018160  229016 command_runner.go:130] > # ctr_stop_timeout = 30
	I1030 23:25:22.018168  229016 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1030 23:25:22.018177  229016 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1030 23:25:22.018202  229016 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1030 23:25:22.018215  229016 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1030 23:25:22.018220  229016 command_runner.go:130] > drop_infra_ctr = false
	I1030 23:25:22.018226  229016 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1030 23:25:22.018234  229016 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1030 23:25:22.018241  229016 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1030 23:25:22.018251  229016 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1030 23:25:22.018259  229016 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1030 23:25:22.018264  229016 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1030 23:25:22.018271  229016 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1030 23:25:22.018278  229016 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1030 23:25:22.018285  229016 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1030 23:25:22.018291  229016 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1030 23:25:22.018300  229016 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1030 23:25:22.018307  229016 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1030 23:25:22.018314  229016 command_runner.go:130] > # default_runtime = "runc"
	I1030 23:25:22.018319  229016 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1030 23:25:22.018328  229016 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1030 23:25:22.018345  229016 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1030 23:25:22.018353  229016 command_runner.go:130] > # creation as a file is not desired either.
	I1030 23:25:22.018361  229016 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1030 23:25:22.018368  229016 command_runner.go:130] > # the hostname is being managed dynamically.
	I1030 23:25:22.018373  229016 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1030 23:25:22.018382  229016 command_runner.go:130] > # ]
	I1030 23:25:22.018391  229016 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1030 23:25:22.018399  229016 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1030 23:25:22.018408  229016 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1030 23:25:22.018414  229016 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1030 23:25:22.018420  229016 command_runner.go:130] > #
	I1030 23:25:22.018424  229016 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1030 23:25:22.018432  229016 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1030 23:25:22.018436  229016 command_runner.go:130] > #  runtime_type = "oci"
	I1030 23:25:22.018443  229016 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1030 23:25:22.018448  229016 command_runner.go:130] > #  privileged_without_host_devices = false
	I1030 23:25:22.018455  229016 command_runner.go:130] > #  allowed_annotations = []
	I1030 23:25:22.018459  229016 command_runner.go:130] > # Where:
	I1030 23:25:22.018467  229016 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1030 23:25:22.018473  229016 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1030 23:25:22.018482  229016 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1030 23:25:22.018489  229016 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1030 23:25:22.018495  229016 command_runner.go:130] > #   in $PATH.
	I1030 23:25:22.018501  229016 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1030 23:25:22.018510  229016 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1030 23:25:22.018542  229016 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1030 23:25:22.018548  229016 command_runner.go:130] > #   state.
	I1030 23:25:22.018554  229016 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1030 23:25:22.018567  229016 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1030 23:25:22.018585  229016 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1030 23:25:22.018597  229016 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1030 23:25:22.018606  229016 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1030 23:25:22.018620  229016 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1030 23:25:22.018631  229016 command_runner.go:130] > #   The currently recognized values are:
	I1030 23:25:22.018644  229016 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1030 23:25:22.018662  229016 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1030 23:25:22.018674  229016 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1030 23:25:22.018682  229016 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1030 23:25:22.018693  229016 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1030 23:25:22.018699  229016 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1030 23:25:22.018707  229016 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1030 23:25:22.018716  229016 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1030 23:25:22.018727  229016 command_runner.go:130] > #   should be moved to the container's cgroup
	I1030 23:25:22.018734  229016 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1030 23:25:22.018738  229016 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1030 23:25:22.018745  229016 command_runner.go:130] > runtime_type = "oci"
	I1030 23:25:22.018749  229016 command_runner.go:130] > runtime_root = "/run/runc"
	I1030 23:25:22.018756  229016 command_runner.go:130] > runtime_config_path = ""
	I1030 23:25:22.018760  229016 command_runner.go:130] > monitor_path = ""
	I1030 23:25:22.018766  229016 command_runner.go:130] > monitor_cgroup = ""
	I1030 23:25:22.018770  229016 command_runner.go:130] > monitor_exec_cgroup = ""
	I1030 23:25:22.018778  229016 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1030 23:25:22.018785  229016 command_runner.go:130] > # running containers
	I1030 23:25:22.018790  229016 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1030 23:25:22.018796  229016 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1030 23:25:22.018852  229016 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1030 23:25:22.018860  229016 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1030 23:25:22.018865  229016 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1030 23:25:22.018872  229016 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1030 23:25:22.018877  229016 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1030 23:25:22.018886  229016 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1030 23:25:22.018891  229016 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1030 23:25:22.018895  229016 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1030 23:25:22.018901  229016 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1030 23:25:22.018909  229016 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1030 23:25:22.018918  229016 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1030 23:25:22.018927  229016 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1030 23:25:22.018936  229016 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1030 23:25:22.018944  229016 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1030 23:25:22.018953  229016 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1030 23:25:22.018963  229016 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1030 23:25:22.018974  229016 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1030 23:25:22.018983  229016 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1030 23:25:22.018987  229016 command_runner.go:130] > # Example:
	I1030 23:25:22.018994  229016 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1030 23:25:22.018999  229016 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1030 23:25:22.019005  229016 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1030 23:25:22.019010  229016 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1030 23:25:22.019017  229016 command_runner.go:130] > # cpuset = "0-1"
	I1030 23:25:22.019021  229016 command_runner.go:130] > # cpushares = 0
	I1030 23:25:22.019025  229016 command_runner.go:130] > # Where:
	I1030 23:25:22.019030  229016 command_runner.go:130] > # The workload name is workload-type.
	I1030 23:25:22.019036  229016 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1030 23:25:22.019042  229016 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1030 23:25:22.019047  229016 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1030 23:25:22.019055  229016 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1030 23:25:22.019065  229016 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1030 23:25:22.019071  229016 command_runner.go:130] > # 
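The workloads table described above is driven purely by pod annotations. As a rough illustration only (the annotation keys are copied from the example comments above; the container name "app" and the cpushares value are hypothetical), a pod opting into that example workload could be constructed like this with the standard Kubernetes Go types:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "workload-demo",
			Annotations: map[string]string{
				// Activation annotation: key only, the value is ignored by CRI-O.
				"io.crio/workload": "",
				// Per-container override, following the example form
				// io.crio.workload-type/$container_name = {"cpushares": "value"}.
				"io.crio.workload-type/app": `{"cpushares": "512"}`,
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "registry.k8s.io/pause:3.9"}},
		},
	}
	fmt.Println(pod.Name, pod.Annotations)
}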
	I1030 23:25:22.019077  229016 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1030 23:25:22.019083  229016 command_runner.go:130] > #
	I1030 23:25:22.019089  229016 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1030 23:25:22.019095  229016 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1030 23:25:22.019103  229016 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1030 23:25:22.019131  229016 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1030 23:25:22.019139  229016 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1030 23:25:22.019143  229016 command_runner.go:130] > [crio.image]
	I1030 23:25:22.019158  229016 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1030 23:25:22.019168  229016 command_runner.go:130] > # default_transport = "docker://"
	I1030 23:25:22.019179  229016 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1030 23:25:22.019192  229016 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1030 23:25:22.019202  229016 command_runner.go:130] > # global_auth_file = ""
	I1030 23:25:22.019213  229016 command_runner.go:130] > # The image used to instantiate infra containers.
	I1030 23:25:22.019224  229016 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:25:22.019235  229016 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1030 23:25:22.019249  229016 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1030 23:25:22.019257  229016 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1030 23:25:22.019263  229016 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:25:22.019270  229016 command_runner.go:130] > # pause_image_auth_file = ""
	I1030 23:25:22.019275  229016 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1030 23:25:22.019281  229016 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1030 23:25:22.019290  229016 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1030 23:25:22.019296  229016 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1030 23:25:22.019300  229016 command_runner.go:130] > # pause_command = "/pause"
	I1030 23:25:22.019305  229016 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1030 23:25:22.019318  229016 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1030 23:25:22.019328  229016 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1030 23:25:22.019338  229016 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1030 23:25:22.019346  229016 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1030 23:25:22.019353  229016 command_runner.go:130] > # signature_policy = ""
	I1030 23:25:22.019366  229016 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1030 23:25:22.019380  229016 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1030 23:25:22.019387  229016 command_runner.go:130] > # changing them here.
	I1030 23:25:22.019397  229016 command_runner.go:130] > # insecure_registries = [
	I1030 23:25:22.019404  229016 command_runner.go:130] > # ]
	I1030 23:25:22.019411  229016 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1030 23:25:22.019418  229016 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1030 23:25:22.019422  229016 command_runner.go:130] > # image_volumes = "mkdir"
	I1030 23:25:22.019428  229016 command_runner.go:130] > # Temporary directory to use for storing big files
	I1030 23:25:22.019433  229016 command_runner.go:130] > # big_files_temporary_dir = ""
	I1030 23:25:22.019441  229016 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1030 23:25:22.019448  229016 command_runner.go:130] > # CNI plugins.
	I1030 23:25:22.019452  229016 command_runner.go:130] > [crio.network]
	I1030 23:25:22.019463  229016 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1030 23:25:22.019471  229016 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1030 23:25:22.019478  229016 command_runner.go:130] > # cni_default_network = ""
	I1030 23:25:22.019491  229016 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1030 23:25:22.019502  229016 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1030 23:25:22.019514  229016 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1030 23:25:22.019523  229016 command_runner.go:130] > # plugin_dirs = [
	I1030 23:25:22.019533  229016 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1030 23:25:22.019540  229016 command_runner.go:130] > # ]
	I1030 23:25:22.019550  229016 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1030 23:25:22.019560  229016 command_runner.go:130] > [crio.metrics]
	I1030 23:25:22.019577  229016 command_runner.go:130] > # Globally enable or disable metrics support.
	I1030 23:25:22.019584  229016 command_runner.go:130] > enable_metrics = true
	I1030 23:25:22.019589  229016 command_runner.go:130] > # Specify enabled metrics collectors.
	I1030 23:25:22.019596  229016 command_runner.go:130] > # Per default all metrics are enabled.
	I1030 23:25:22.019602  229016 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1030 23:25:22.019613  229016 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1030 23:25:22.019621  229016 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1030 23:25:22.019630  229016 command_runner.go:130] > # metrics_collectors = [
	I1030 23:25:22.019637  229016 command_runner.go:130] > # 	"operations",
	I1030 23:25:22.019642  229016 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1030 23:25:22.019649  229016 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1030 23:25:22.019653  229016 command_runner.go:130] > # 	"operations_errors",
	I1030 23:25:22.019657  229016 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1030 23:25:22.019662  229016 command_runner.go:130] > # 	"image_pulls_by_name",
	I1030 23:25:22.019667  229016 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1030 23:25:22.019673  229016 command_runner.go:130] > # 	"image_pulls_failures",
	I1030 23:25:22.019678  229016 command_runner.go:130] > # 	"image_pulls_successes",
	I1030 23:25:22.019684  229016 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1030 23:25:22.019688  229016 command_runner.go:130] > # 	"image_layer_reuse",
	I1030 23:25:22.019695  229016 command_runner.go:130] > # 	"containers_oom_total",
	I1030 23:25:22.019699  229016 command_runner.go:130] > # 	"containers_oom",
	I1030 23:25:22.019705  229016 command_runner.go:130] > # 	"processes_defunct",
	I1030 23:25:22.019710  229016 command_runner.go:130] > # 	"operations_total",
	I1030 23:25:22.019716  229016 command_runner.go:130] > # 	"operations_latency_seconds",
	I1030 23:25:22.019721  229016 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1030 23:25:22.019730  229016 command_runner.go:130] > # 	"operations_errors_total",
	I1030 23:25:22.019737  229016 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1030 23:25:22.019741  229016 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1030 23:25:22.019746  229016 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1030 23:25:22.019751  229016 command_runner.go:130] > # 	"image_pulls_success_total",
	I1030 23:25:22.019758  229016 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1030 23:25:22.019762  229016 command_runner.go:130] > # 	"containers_oom_count_total",
	I1030 23:25:22.019768  229016 command_runner.go:130] > # ]
	I1030 23:25:22.019773  229016 command_runner.go:130] > # The port on which the metrics server will listen.
	I1030 23:25:22.019777  229016 command_runner.go:130] > # metrics_port = 9090
	I1030 23:25:22.019785  229016 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1030 23:25:22.019789  229016 command_runner.go:130] > # metrics_socket = ""
	I1030 23:25:22.019800  229016 command_runner.go:130] > # The certificate for the secure metrics server.
	I1030 23:25:22.019807  229016 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1030 23:25:22.019813  229016 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1030 23:25:22.019820  229016 command_runner.go:130] > # certificate on any modification event.
	I1030 23:25:22.019825  229016 command_runner.go:130] > # metrics_cert = ""
	I1030 23:25:22.019832  229016 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1030 23:25:22.019862  229016 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1030 23:25:22.019889  229016 command_runner.go:130] > # metrics_key = ""
	I1030 23:25:22.019909  229016 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1030 23:25:22.019918  229016 command_runner.go:130] > [crio.tracing]
	I1030 23:25:22.019928  229016 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1030 23:25:22.019945  229016 command_runner.go:130] > # enable_tracing = false
	I1030 23:25:22.019957  229016 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1030 23:25:22.019965  229016 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1030 23:25:22.019969  229016 command_runner.go:130] > # Number of samples to collect per million spans.
	I1030 23:25:22.019974  229016 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1030 23:25:22.019982  229016 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1030 23:25:22.019986  229016 command_runner.go:130] > [crio.stats]
	I1030 23:25:22.019992  229016 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1030 23:25:22.019999  229016 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1030 23:25:22.020004  229016 command_runner.go:130] > # stats_collection_period = 0
	I1030 23:25:22.020058  229016 command_runner.go:130] ! time="2023-10-30 23:25:21.990527548Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1030 23:25:22.020085  229016 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1030 23:25:22.020185  229016 cni.go:84] Creating CNI manager for ""
	I1030 23:25:22.020199  229016 cni.go:136] 1 nodes found, recommending kindnet
	I1030 23:25:22.020221  229016 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1030 23:25:22.020238  229016 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.231 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-370491 NodeName:multinode-370491 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 23:25:22.020375  229016 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-370491"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 23:25:22.020459  229016 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-370491 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-370491 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1030 23:25:22.020517  229016 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1030 23:25:22.029948  229016 command_runner.go:130] > kubeadm
	I1030 23:25:22.029963  229016 command_runner.go:130] > kubectl
	I1030 23:25:22.029972  229016 command_runner.go:130] > kubelet
	I1030 23:25:22.030098  229016 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 23:25:22.030167  229016 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 23:25:22.039086  229016 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1030 23:25:22.054630  229016 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 23:25:22.070521  229016 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1030 23:25:22.085951  229016 ssh_runner.go:195] Run: grep 192.168.39.231	control-plane.minikube.internal$ /etc/hosts
	I1030 23:25:22.090494  229016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.231	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
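The /etc/hosts update above follows a filter-and-rewrite pattern: drop any existing line ending in a tab plus control-plane.minikube.internal, append the fresh IP mapping, write the result to a temporary file, and only then copy it over /etc/hosts. A minimal local Go sketch of the same idea (an illustration, not minikube's ssh_runner code; the real command runs over SSH and uses sudo cp rather than a rename):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites hostsPath so it contains exactly one "ip<TAB>host" line for host.
func upsertHost(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirror the grep -v: drop lines that already map the control-plane name.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	// Write to a temp file first, then rename, so readers never see a partial file.
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.39.231", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}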
	I1030 23:25:22.103071  229016 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491 for IP: 192.168.39.231
	I1030 23:25:22.103114  229016 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:25:22.103308  229016 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1030 23:25:22.103374  229016 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1030 23:25:22.103442  229016 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.key
	I1030 23:25:22.103463  229016 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.crt with IP's: []
	I1030 23:25:22.177487  229016 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.crt ...
	I1030 23:25:22.177526  229016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.crt: {Name:mkde821bdcb45b0f3d063c6a6bc1960aaaf337ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:25:22.177713  229016 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.key ...
	I1030 23:25:22.177737  229016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.key: {Name:mk1d689e37b3bcb473514ffc6f5cfd0acb31d655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:25:22.177867  229016 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/apiserver.key.cabadef2
	I1030 23:25:22.177888  229016 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/apiserver.crt.cabadef2 with IP's: [192.168.39.231 10.96.0.1 127.0.0.1 10.0.0.1]
	I1030 23:25:22.339559  229016 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/apiserver.crt.cabadef2 ...
	I1030 23:25:22.339595  229016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/apiserver.crt.cabadef2: {Name:mkbf47b3e418f24af51796b10e3afe3cbea6423b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:25:22.339794  229016 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/apiserver.key.cabadef2 ...
	I1030 23:25:22.339816  229016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/apiserver.key.cabadef2: {Name:mk29734c73046fa1f9031fbde25d07064377355d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:25:22.339920  229016 certs.go:337] copying /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/apiserver.crt.cabadef2 -> /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/apiserver.crt
	I1030 23:25:22.340003  229016 certs.go:341] copying /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/apiserver.key.cabadef2 -> /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/apiserver.key
	I1030 23:25:22.340086  229016 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/proxy-client.key
	I1030 23:25:22.340106  229016 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/proxy-client.crt with IP's: []
	I1030 23:25:22.618130  229016 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/proxy-client.crt ...
	I1030 23:25:22.618165  229016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/proxy-client.crt: {Name:mk5397666ccc848916ffc0d2d5836dd9497ceedb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:25:22.618331  229016 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/proxy-client.key ...
	I1030 23:25:22.618349  229016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/proxy-client.key: {Name:mk75004230c5a97103248c91665876e566bfe421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
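The crypto.go steps above generate CA-signed certificates whose IP SANs cover the node IP, the kubernetes service ClusterIP (10.96.0.1), 127.0.0.1 and 10.0.0.1. A compressed sketch of issuing such a certificate with Go's standard library, assuming a freshly created stand-in CA (this illustrates the mechanism only and is not minikube's actual implementation; error handling is elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signServingCert issues a certificate for the given IP SANs, signed by caCert/caKey.
func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"system:masters"}},
		IPAddresses:  ips, // e.g. 192.168.39.231, 10.96.0.1, 127.0.0.1, 10.0.0.1
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

func main() {
	// Hypothetical self-signed CA, standing in for minikubeCA; errors ignored in this sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	ips := []net.IP{net.ParseIP("192.168.39.231"), net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1")}
	der, _, err := signServingCert(caCert, caKey, ips)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued cert: %d DER bytes\n", len(der))
}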
	I1030 23:25:22.618442  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1030 23:25:22.618470  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1030 23:25:22.618490  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1030 23:25:22.618516  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1030 23:25:22.618534  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 23:25:22.618554  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 23:25:22.618572  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 23:25:22.618591  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 23:25:22.618659  229016 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1030 23:25:22.618704  229016 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1030 23:25:22.618721  229016 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 23:25:22.618760  229016 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1030 23:25:22.618806  229016 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1030 23:25:22.618843  229016 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1030 23:25:22.618898  229016 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1030 23:25:22.618943  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:25:22.618969  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem -> /usr/share/ca-certificates/216005.pem
	I1030 23:25:22.618988  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> /usr/share/ca-certificates/2160052.pem
	I1030 23:25:22.619640  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1030 23:25:22.643053  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1030 23:25:22.665666  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 23:25:22.688030  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1030 23:25:22.710175  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 23:25:22.732617  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 23:25:22.757037  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 23:25:22.778985  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1030 23:25:22.801081  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 23:25:22.824053  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1030 23:25:22.845692  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1030 23:25:22.867614  229016 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1030 23:25:22.885090  229016 ssh_runner.go:195] Run: openssl version
	I1030 23:25:22.890115  229016 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1030 23:25:22.890364  229016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 23:25:22.899888  229016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:25:22.904093  229016 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:25:22.904173  229016 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:25:22.904223  229016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:25:22.909393  229016 command_runner.go:130] > b5213941
	I1030 23:25:22.909847  229016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 23:25:22.919176  229016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1030 23:25:22.928572  229016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1030 23:25:22.932706  229016 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1030 23:25:22.932791  229016 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1030 23:25:22.932842  229016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1030 23:25:22.938018  229016 command_runner.go:130] > 51391683
	I1030 23:25:22.938230  229016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1030 23:25:22.947234  229016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1030 23:25:22.956388  229016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1030 23:25:22.960522  229016 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1030 23:25:22.960647  229016 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1030 23:25:22.960734  229016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1030 23:25:22.965950  229016 command_runner.go:130] > 3ec20f2e
	I1030 23:25:22.966001  229016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
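Each of the three certificate blocks above ends the same way: compute the certificate's OpenSSL subject hash and make /etc/ssl/certs/<hash>.0 a symlink to the PEM file, which is how OpenSSL-based clients locate trusted CAs. A small Go sketch of that final step, shelling out to the same openssl invocation seen in the log (illustration only; writing under /etc/ssl/certs needs root, which the test obtains via sudo over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash creates certsDir/<subject-hash>.0 -> certPath, mirroring the
// "openssl x509 -hash -noout -in ..." plus "ln -fs ..." sequence in the log.
func linkByHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join(certsDir, hash+".0")
	// ln -fs semantics: remove any stale link first, then create the new one.
	_ = os.Remove(link)
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("created", link)
}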
	I1030 23:25:22.975009  229016 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1030 23:25:22.978649  229016 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1030 23:25:22.978775  229016 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1030 23:25:22.978833  229016 kubeadm.go:404] StartCluster: {Name:multinode-370491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-370491 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1030 23:25:22.978993  229016 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 23:25:22.979061  229016 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 23:25:23.017482  229016 cri.go:89] found id: ""
	I1030 23:25:23.017550  229016 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 23:25:23.026276  229016 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1030 23:25:23.026310  229016 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1030 23:25:23.026320  229016 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1030 23:25:23.026409  229016 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 23:25:23.034624  229016 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 23:25:23.042607  229016 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1030 23:25:23.042636  229016 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1030 23:25:23.042649  229016 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1030 23:25:23.042659  229016 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 23:25:23.042692  229016 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 23:25:23.042733  229016 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1030 23:25:23.154430  229016 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1030 23:25:23.154461  229016 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1030 23:25:23.154535  229016 kubeadm.go:322] [preflight] Running pre-flight checks
	I1030 23:25:23.154542  229016 command_runner.go:130] > [preflight] Running pre-flight checks
	I1030 23:25:23.386710  229016 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 23:25:23.386759  229016 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1030 23:25:23.386928  229016 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 23:25:23.386971  229016 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1030 23:25:23.387090  229016 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 23:25:23.387100  229016 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1030 23:25:23.608144  229016 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 23:25:23.608181  229016 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 23:25:23.742851  229016 out.go:204]   - Generating certificates and keys ...
	I1030 23:25:23.742984  229016 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1030 23:25:23.743016  229016 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1030 23:25:23.743124  229016 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1030 23:25:23.743152  229016 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1030 23:25:23.743209  229016 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1030 23:25:23.743221  229016 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1030 23:25:23.753653  229016 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1030 23:25:23.753680  229016 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1030 23:25:24.100676  229016 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1030 23:25:24.100707  229016 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1030 23:25:24.245801  229016 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1030 23:25:24.245830  229016 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1030 23:25:24.345075  229016 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1030 23:25:24.345108  229016 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1030 23:25:24.345289  229016 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-370491] and IPs [192.168.39.231 127.0.0.1 ::1]
	I1030 23:25:24.345300  229016 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-370491] and IPs [192.168.39.231 127.0.0.1 ::1]
	I1030 23:25:24.562339  229016 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1030 23:25:24.562368  229016 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1030 23:25:24.562493  229016 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-370491] and IPs [192.168.39.231 127.0.0.1 ::1]
	I1030 23:25:24.562526  229016 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-370491] and IPs [192.168.39.231 127.0.0.1 ::1]
	I1030 23:25:24.765422  229016 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1030 23:25:24.765451  229016 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1030 23:25:24.918389  229016 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1030 23:25:24.918421  229016 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1030 23:25:24.985857  229016 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1030 23:25:24.985883  229016 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1030 23:25:24.985990  229016 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 23:25:24.986003  229016 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 23:25:25.264383  229016 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 23:25:25.264421  229016 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 23:25:25.441486  229016 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 23:25:25.441520  229016 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 23:25:25.743535  229016 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 23:25:25.743569  229016 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 23:25:25.902851  229016 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 23:25:25.902883  229016 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 23:25:25.903550  229016 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 23:25:25.903567  229016 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 23:25:25.906641  229016 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 23:25:25.908745  229016 out.go:204]   - Booting up control plane ...
	I1030 23:25:25.906717  229016 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 23:25:25.908904  229016 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 23:25:25.908949  229016 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 23:25:25.909036  229016 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 23:25:25.909049  229016 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 23:25:25.909162  229016 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 23:25:25.909186  229016 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 23:25:25.928132  229016 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 23:25:25.928158  229016 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 23:25:25.929102  229016 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 23:25:25.929117  229016 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 23:25:25.929260  229016 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1030 23:25:25.929271  229016 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1030 23:25:26.053466  229016 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 23:25:26.053527  229016 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1030 23:25:34.052640  229016 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003609 seconds
	I1030 23:25:34.052678  229016 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.003609 seconds
	I1030 23:25:34.052843  229016 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1030 23:25:34.052866  229016 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1030 23:25:34.067956  229016 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1030 23:25:34.067982  229016 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1030 23:25:34.600175  229016 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1030 23:25:34.600222  229016 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1030 23:25:34.600374  229016 kubeadm.go:322] [mark-control-plane] Marking the node multinode-370491 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1030 23:25:34.600382  229016 command_runner.go:130] > [mark-control-plane] Marking the node multinode-370491 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1030 23:25:35.115380  229016 kubeadm.go:322] [bootstrap-token] Using token: 2be8nh.tzo7gkrfzwe6nmfh
	I1030 23:25:35.116785  229016 out.go:204]   - Configuring RBAC rules ...
	I1030 23:25:35.115441  229016 command_runner.go:130] > [bootstrap-token] Using token: 2be8nh.tzo7gkrfzwe6nmfh
	I1030 23:25:35.116898  229016 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1030 23:25:35.116910  229016 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1030 23:25:35.122493  229016 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1030 23:25:35.122530  229016 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1030 23:25:35.130450  229016 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1030 23:25:35.130467  229016 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1030 23:25:35.133865  229016 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1030 23:25:35.133889  229016 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1030 23:25:35.138476  229016 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1030 23:25:35.138490  229016 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1030 23:25:35.146297  229016 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1030 23:25:35.146312  229016 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1030 23:25:35.158127  229016 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1030 23:25:35.158150  229016 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1030 23:25:35.404699  229016 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1030 23:25:35.404731  229016 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1030 23:25:35.535975  229016 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1030 23:25:35.536005  229016 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1030 23:25:35.536012  229016 kubeadm.go:322] 
	I1030 23:25:35.536079  229016 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1030 23:25:35.536115  229016 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1030 23:25:35.536151  229016 kubeadm.go:322] 
	I1030 23:25:35.536266  229016 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1030 23:25:35.536281  229016 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1030 23:25:35.536288  229016 kubeadm.go:322] 
	I1030 23:25:35.536323  229016 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1030 23:25:35.536355  229016 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1030 23:25:35.536425  229016 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1030 23:25:35.536433  229016 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1030 23:25:35.536496  229016 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1030 23:25:35.536506  229016 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1030 23:25:35.536512  229016 kubeadm.go:322] 
	I1030 23:25:35.536587  229016 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1030 23:25:35.536597  229016 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1030 23:25:35.536606  229016 kubeadm.go:322] 
	I1030 23:25:35.536677  229016 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1030 23:25:35.536683  229016 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1030 23:25:35.536687  229016 kubeadm.go:322] 
	I1030 23:25:35.536732  229016 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1030 23:25:35.536752  229016 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1030 23:25:35.536861  229016 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1030 23:25:35.536889  229016 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1030 23:25:35.536996  229016 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1030 23:25:35.537009  229016 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1030 23:25:35.537015  229016 kubeadm.go:322] 
	I1030 23:25:35.537166  229016 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1030 23:25:35.537186  229016 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1030 23:25:35.537294  229016 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1030 23:25:35.537307  229016 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1030 23:25:35.537318  229016 kubeadm.go:322] 
	I1030 23:25:35.537398  229016 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2be8nh.tzo7gkrfzwe6nmfh \
	I1030 23:25:35.537418  229016 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 2be8nh.tzo7gkrfzwe6nmfh \
	I1030 23:25:35.537560  229016 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 \
	I1030 23:25:35.537575  229016 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 \
	I1030 23:25:35.537601  229016 kubeadm.go:322] 	--control-plane 
	I1030 23:25:35.537612  229016 command_runner.go:130] > 	--control-plane 
	I1030 23:25:35.537617  229016 kubeadm.go:322] 
	I1030 23:25:35.537712  229016 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1030 23:25:35.537732  229016 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1030 23:25:35.537741  229016 kubeadm.go:322] 
	I1030 23:25:35.537875  229016 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2be8nh.tzo7gkrfzwe6nmfh \
	I1030 23:25:35.537894  229016 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 2be8nh.tzo7gkrfzwe6nmfh \
	I1030 23:25:35.538019  229016 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1030 23:25:35.538029  229016 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1030 23:25:35.538224  229016 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 23:25:35.538246  229016 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 23:25:35.538283  229016 cni.go:84] Creating CNI manager for ""
	I1030 23:25:35.538294  229016 cni.go:136] 1 nodes found, recommending kindnet
	I1030 23:25:35.540030  229016 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1030 23:25:35.541517  229016 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1030 23:25:35.552595  229016 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1030 23:25:35.552619  229016 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1030 23:25:35.552627  229016 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1030 23:25:35.552638  229016 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1030 23:25:35.552652  229016 command_runner.go:130] > Access: 2023-10-30 23:25:04.975208312 +0000
	I1030 23:25:35.552664  229016 command_runner.go:130] > Modify: 2023-10-30 22:33:43.000000000 +0000
	I1030 23:25:35.552676  229016 command_runner.go:130] > Change: 2023-10-30 23:25:03.217208312 +0000
	I1030 23:25:35.552685  229016 command_runner.go:130] >  Birth: -
	I1030 23:25:35.552876  229016 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1030 23:25:35.552895  229016 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1030 23:25:35.661803  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1030 23:25:36.757383  229016 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1030 23:25:36.757408  229016 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1030 23:25:36.757416  229016 command_runner.go:130] > serviceaccount/kindnet created
	I1030 23:25:36.757424  229016 command_runner.go:130] > daemonset.apps/kindnet created
	I1030 23:25:36.757448  229016 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.095609872s)
	I1030 23:25:36.757495  229016 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 23:25:36.757576  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:36.757597  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4 minikube.k8s.io/name=multinode-370491 minikube.k8s.io/updated_at=2023_10_30T23_25_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:36.800337  229016 command_runner.go:130] > -16
	I1030 23:25:36.892739  229016 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1030 23:25:36.895346  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:36.906222  229016 ops.go:34] apiserver oom_adj: -16
	I1030 23:25:36.906311  229016 command_runner.go:130] > node/multinode-370491 labeled
	I1030 23:25:36.994644  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:36.994754  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:37.092487  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:37.593326  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:37.689377  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:38.092793  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:38.181512  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:38.593144  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:38.680315  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:39.093081  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:39.179844  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:39.593601  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:39.712916  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:40.093327  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:40.176245  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:40.592790  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:40.686155  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:41.093695  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:41.180495  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:41.593002  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:41.688235  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:42.092783  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:42.179919  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:42.592755  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:42.700161  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:43.092929  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:43.175965  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:43.593530  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:43.711874  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:44.093629  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:44.195257  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:44.592817  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:44.675328  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:45.092894  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:45.182437  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:45.593028  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:45.685254  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:46.092841  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:46.176956  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:46.593174  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:46.679682  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:47.093264  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:47.231924  229016 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1030 23:25:47.593413  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1030 23:25:47.691767  229016 command_runner.go:130] > NAME      SECRETS   AGE
	I1030 23:25:47.692486  229016 command_runner.go:130] > default   0         0s
	I1030 23:25:47.703408  229016 kubeadm.go:1081] duration metric: took 10.945897648s to wait for elevateKubeSystemPrivileges.
	I1030 23:25:47.703443  229016 kubeadm.go:406] StartCluster complete in 24.724614885s
	I1030 23:25:47.703491  229016 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:25:47.703586  229016 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:25:47.704341  229016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:25:47.704604  229016 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1030 23:25:47.704621  229016 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1030 23:25:47.704742  229016 addons.go:69] Setting storage-provisioner=true in profile "multinode-370491"
	I1030 23:25:47.704763  229016 addons.go:69] Setting default-storageclass=true in profile "multinode-370491"
	I1030 23:25:47.704770  229016 addons.go:231] Setting addon storage-provisioner=true in "multinode-370491"
	I1030 23:25:47.704786  229016 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-370491"
	I1030 23:25:47.704851  229016 host.go:66] Checking if "multinode-370491" exists ...
	I1030 23:25:47.704850  229016 config.go:182] Loaded profile config "multinode-370491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:25:47.704960  229016 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:25:47.705302  229016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:25:47.705348  229016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:25:47.705381  229016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:25:47.705332  229016 kapi.go:59] client config for multinode-370491: &rest.Config{Host:"https://192.168.39.231:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.crt", KeyFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.key", CAFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 23:25:47.705351  229016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:25:47.706217  229016 cert_rotation.go:137] Starting client certificate rotation controller
	I1030 23:25:47.706637  229016 round_trippers.go:463] GET https://192.168.39.231:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1030 23:25:47.706656  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:47.706668  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:47.706678  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:47.719450  229016 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1030 23:25:47.719467  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:47.719474  229016 round_trippers.go:580]     Audit-Id: a9058662-2ab0-4367-873c-8c72efa31e3b
	I1030 23:25:47.719479  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:47.719484  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:47.719489  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:47.719494  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:47.719507  229016 round_trippers.go:580]     Content-Length: 291
	I1030 23:25:47.719514  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:47 GMT
	I1030 23:25:47.719561  229016 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"20d25ead-69ff-4f03-b32f-13c215a6d708","resourceVersion":"234","creationTimestamp":"2023-10-30T23:25:35Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1030 23:25:47.720122  229016 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"20d25ead-69ff-4f03-b32f-13c215a6d708","resourceVersion":"234","creationTimestamp":"2023-10-30T23:25:35Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1030 23:25:47.720207  229016 round_trippers.go:463] PUT https://192.168.39.231:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1030 23:25:47.720219  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:47.720230  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:47.720243  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:47.720256  229016 round_trippers.go:473]     Content-Type: application/json
	I1030 23:25:47.721930  229016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42527
	I1030 23:25:47.722336  229016 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:25:47.722850  229016 main.go:141] libmachine: Using API Version  1
	I1030 23:25:47.722871  229016 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:25:47.723226  229016 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:25:47.723709  229016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:25:47.723757  229016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:25:47.725330  229016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34031
	I1030 23:25:47.725859  229016 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:25:47.726364  229016 main.go:141] libmachine: Using API Version  1
	I1030 23:25:47.726389  229016 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:25:47.726786  229016 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:25:47.726998  229016 main.go:141] libmachine: (multinode-370491) Calling .GetState
	I1030 23:25:47.729502  229016 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:25:47.729826  229016 kapi.go:59] client config for multinode-370491: &rest.Config{Host:"https://192.168.39.231:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.crt", KeyFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.key", CAFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 23:25:47.730141  229016 addons.go:231] Setting addon default-storageclass=true in "multinode-370491"
	I1030 23:25:47.730178  229016 host.go:66] Checking if "multinode-370491" exists ...
	I1030 23:25:47.730505  229016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:25:47.730563  229016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:25:47.733829  229016 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1030 23:25:47.733851  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:47.733863  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:47.733872  229016 round_trippers.go:580]     Content-Length: 291
	I1030 23:25:47.733882  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:47 GMT
	I1030 23:25:47.733891  229016 round_trippers.go:580]     Audit-Id: 36027e6b-c53c-4ad9-a140-e2c5f2c134cb
	I1030 23:25:47.733909  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:47.733921  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:47.733941  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:47.733971  229016 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"20d25ead-69ff-4f03-b32f-13c215a6d708","resourceVersion":"322","creationTimestamp":"2023-10-30T23:25:35Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1030 23:25:47.734162  229016 round_trippers.go:463] GET https://192.168.39.231:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1030 23:25:47.734182  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:47.734194  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:47.734204  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:47.738622  229016 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 23:25:47.738650  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:47.738662  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:47 GMT
	I1030 23:25:47.738672  229016 round_trippers.go:580]     Audit-Id: 23f2bccf-b3d3-4090-8e2d-48dda72d0973
	I1030 23:25:47.738686  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:47.738699  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:47.738713  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:47.738725  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:47.738739  229016 round_trippers.go:580]     Content-Length: 291
	I1030 23:25:47.738782  229016 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"20d25ead-69ff-4f03-b32f-13c215a6d708","resourceVersion":"322","creationTimestamp":"2023-10-30T23:25:35Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1030 23:25:47.738904  229016 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-370491" context rescaled to 1 replicas
	I1030 23:25:47.738954  229016 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 23:25:47.741551  229016 out.go:177] * Verifying Kubernetes components...
	I1030 23:25:47.739963  229016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40459
	I1030 23:25:47.742854  229016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 23:25:47.743260  229016 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:25:47.743819  229016 main.go:141] libmachine: Using API Version  1
	I1030 23:25:47.743845  229016 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:25:47.744222  229016 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:25:47.744401  229016 main.go:141] libmachine: (multinode-370491) Calling .GetState
	I1030 23:25:47.745114  229016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42601
	I1030 23:25:47.745567  229016 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:25:47.746050  229016 main.go:141] libmachine: Using API Version  1
	I1030 23:25:47.746070  229016 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:25:47.746224  229016 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:25:47.747895  229016 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1030 23:25:47.746582  229016 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:25:47.749345  229016 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 23:25:47.749368  229016 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1030 23:25:47.749387  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:25:47.749780  229016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:25:47.749838  229016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:25:47.752595  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:47.753103  229016 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:25:47.753173  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:47.753390  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:25:47.753567  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:25:47.753767  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:25:47.753936  229016 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa Username:docker}
	I1030 23:25:47.764899  229016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I1030 23:25:47.765363  229016 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:25:47.765849  229016 main.go:141] libmachine: Using API Version  1
	I1030 23:25:47.765871  229016 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:25:47.766164  229016 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:25:47.766346  229016 main.go:141] libmachine: (multinode-370491) Calling .GetState
	I1030 23:25:47.768082  229016 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:25:47.768367  229016 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1030 23:25:47.768388  229016 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1030 23:25:47.768408  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:25:47.770799  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:47.771113  229016 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:25:47.771145  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:25:47.771336  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:25:47.771571  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:25:47.771734  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:25:47.771874  229016 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa Username:docker}
	I1030 23:25:47.914402  229016 command_runner.go:130] > apiVersion: v1
	I1030 23:25:47.914429  229016 command_runner.go:130] > data:
	I1030 23:25:47.914436  229016 command_runner.go:130] >   Corefile: |
	I1030 23:25:47.914441  229016 command_runner.go:130] >     .:53 {
	I1030 23:25:47.914445  229016 command_runner.go:130] >         errors
	I1030 23:25:47.914450  229016 command_runner.go:130] >         health {
	I1030 23:25:47.914455  229016 command_runner.go:130] >            lameduck 5s
	I1030 23:25:47.914458  229016 command_runner.go:130] >         }
	I1030 23:25:47.914476  229016 command_runner.go:130] >         ready
	I1030 23:25:47.914482  229016 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1030 23:25:47.914487  229016 command_runner.go:130] >            pods insecure
	I1030 23:25:47.914496  229016 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1030 23:25:47.914509  229016 command_runner.go:130] >            ttl 30
	I1030 23:25:47.914519  229016 command_runner.go:130] >         }
	I1030 23:25:47.914526  229016 command_runner.go:130] >         prometheus :9153
	I1030 23:25:47.914536  229016 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1030 23:25:47.914542  229016 command_runner.go:130] >            max_concurrent 1000
	I1030 23:25:47.914552  229016 command_runner.go:130] >         }
	I1030 23:25:47.914558  229016 command_runner.go:130] >         cache 30
	I1030 23:25:47.914563  229016 command_runner.go:130] >         loop
	I1030 23:25:47.914576  229016 command_runner.go:130] >         reload
	I1030 23:25:47.914587  229016 command_runner.go:130] >         loadbalance
	I1030 23:25:47.914593  229016 command_runner.go:130] >     }
	I1030 23:25:47.914604  229016 command_runner.go:130] > kind: ConfigMap
	I1030 23:25:47.914610  229016 command_runner.go:130] > metadata:
	I1030 23:25:47.914623  229016 command_runner.go:130] >   creationTimestamp: "2023-10-30T23:25:35Z"
	I1030 23:25:47.914632  229016 command_runner.go:130] >   name: coredns
	I1030 23:25:47.914639  229016 command_runner.go:130] >   namespace: kube-system
	I1030 23:25:47.914647  229016 command_runner.go:130] >   resourceVersion: "230"
	I1030 23:25:47.914652  229016 command_runner.go:130] >   uid: d4073356-9e8a-4259-8732-9beb303b7aee
	I1030 23:25:47.916058  229016 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1030 23:25:47.916366  229016 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:25:47.916694  229016 kapi.go:59] client config for multinode-370491: &rest.Config{Host:"https://192.168.39.231:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.crt", KeyFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.key", CAFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 23:25:47.917491  229016 node_ready.go:35] waiting up to 6m0s for node "multinode-370491" to be "Ready" ...
	I1030 23:25:47.917619  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:47.917628  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:47.917638  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:47.917647  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:47.919330  229016 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1030 23:25:47.921384  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:25:47.921404  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:47.921414  229016 round_trippers.go:580]     Audit-Id: e9a39dd6-bb1b-4f68-b13d-ae6df66c6558
	I1030 23:25:47.921422  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:47.921432  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:47.921444  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:47.921455  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:47.921467  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:47 GMT
	I1030 23:25:47.921644  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"315","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6209 chars]
	I1030 23:25:47.922458  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:47.922479  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:47.922490  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:47.922500  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:47.924487  229016 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:25:47.924507  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:47.924517  229016 round_trippers.go:580]     Audit-Id: a178af1b-2bf5-4898-ba57-20121ed850f6
	I1030 23:25:47.924526  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:47.924532  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:47.924539  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:47.924544  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:47.924555  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:47 GMT
	I1030 23:25:47.924681  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"315","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6209 chars]
	I1030 23:25:47.962597  229016 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1030 23:25:48.426209  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:48.426232  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:48.426241  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:48.426248  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:48.435770  229016 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1030 23:25:48.435798  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:48.435806  229016 round_trippers.go:580]     Audit-Id: f85de155-850d-4542-8c4c-94d02afce560
	I1030 23:25:48.435811  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:48.435817  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:48.435822  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:48.435827  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:48.435832  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:48 GMT
	I1030 23:25:48.442773  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"315","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6209 chars]
	I1030 23:25:48.657433  229016 command_runner.go:130] > configmap/coredns replaced
	I1030 23:25:48.667144  229016 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1030 23:25:48.883104  229016 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1030 23:25:48.890394  229016 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1030 23:25:48.898523  229016 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1030 23:25:48.915718  229016 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1030 23:25:48.924211  229016 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1030 23:25:48.925371  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:48.925387  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:48.925396  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:48.925412  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:48.929443  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:25:48.929467  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:48.929475  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:48.929481  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:48 GMT
	I1030 23:25:48.929486  229016 round_trippers.go:580]     Audit-Id: 5843e3c5-136b-475b-bcc1-5e20713ba6c5
	I1030 23:25:48.929494  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:48.929503  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:48.929513  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:48.929687  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"315","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6209 chars]
	I1030 23:25:48.940155  229016 command_runner.go:130] > pod/storage-provisioner created
	I1030 23:25:48.942821  229016 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.02346484s)
	I1030 23:25:48.942847  229016 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1030 23:25:48.942903  229016 main.go:141] libmachine: Making call to close driver server
	I1030 23:25:48.942925  229016 main.go:141] libmachine: (multinode-370491) Calling .Close
	I1030 23:25:48.942907  229016 main.go:141] libmachine: Making call to close driver server
	I1030 23:25:48.942986  229016 main.go:141] libmachine: (multinode-370491) Calling .Close
	I1030 23:25:48.943267  229016 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:25:48.943284  229016 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:25:48.943293  229016 main.go:141] libmachine: Making call to close driver server
	I1030 23:25:48.943306  229016 main.go:141] libmachine: (multinode-370491) Calling .Close
	I1030 23:25:48.943311  229016 main.go:141] libmachine: (multinode-370491) DBG | Closing plugin on server side
	I1030 23:25:48.943337  229016 main.go:141] libmachine: (multinode-370491) DBG | Closing plugin on server side
	I1030 23:25:48.943414  229016 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:25:48.943440  229016 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:25:48.943459  229016 main.go:141] libmachine: Making call to close driver server
	I1030 23:25:48.943513  229016 main.go:141] libmachine: (multinode-370491) Calling .Close
	I1030 23:25:48.943534  229016 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:25:48.943566  229016 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:25:48.943569  229016 main.go:141] libmachine: (multinode-370491) DBG | Closing plugin on server side
	I1030 23:25:48.943677  229016 round_trippers.go:463] GET https://192.168.39.231:8443/apis/storage.k8s.io/v1/storageclasses
	I1030 23:25:48.943687  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:48.943694  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:48.943700  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:48.944052  229016 main.go:141] libmachine: (multinode-370491) DBG | Closing plugin on server side
	I1030 23:25:48.944055  229016 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:25:48.944078  229016 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:25:48.954647  229016 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1030 23:25:48.954664  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:48.954672  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:48.954681  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:48.954690  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:48.954701  229016 round_trippers.go:580]     Content-Length: 1273
	I1030 23:25:48.954710  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:48 GMT
	I1030 23:25:48.954719  229016 round_trippers.go:580]     Audit-Id: e95508c0-790d-448e-8db9-10f9ba83d723
	I1030 23:25:48.954727  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:48.954760  229016 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"372"},"items":[{"metadata":{"name":"standard","uid":"4e73166d-db78-4927-86ad-8dce9e3a57b7","resourceVersion":"361","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1030 23:25:48.955296  229016 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"4e73166d-db78-4927-86ad-8dce9e3a57b7","resourceVersion":"361","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1030 23:25:48.955357  229016 round_trippers.go:463] PUT https://192.168.39.231:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1030 23:25:48.955382  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:48.955394  229016 round_trippers.go:473]     Content-Type: application/json
	I1030 23:25:48.955407  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:48.955420  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:48.958338  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:25:48.958358  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:48.958368  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:48.958377  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:48.958386  229016 round_trippers.go:580]     Content-Length: 1220
	I1030 23:25:48.958398  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:48 GMT
	I1030 23:25:48.958410  229016 round_trippers.go:580]     Audit-Id: c2516ecf-4107-418c-ae45-de7b8f46643f
	I1030 23:25:48.958419  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:48.958431  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:48.958464  229016 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"4e73166d-db78-4927-86ad-8dce9e3a57b7","resourceVersion":"361","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1030 23:25:48.958609  229016 main.go:141] libmachine: Making call to close driver server
	I1030 23:25:48.958625  229016 main.go:141] libmachine: (multinode-370491) Calling .Close
	I1030 23:25:48.958878  229016 main.go:141] libmachine: Successfully made call to close driver server
	I1030 23:25:48.958892  229016 main.go:141] libmachine: Making call to close connection to plugin binary
	I1030 23:25:48.960654  229016 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1030 23:25:48.962000  229016 addons.go:502] enable addons completed in 1.257397764s: enabled=[storage-provisioner default-storageclass]
	I1030 23:25:49.426002  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:49.426026  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:49.426035  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:49.426048  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:49.428893  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:25:49.428918  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:49.428925  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:49.428930  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:49.428947  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:49.428953  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:49 GMT
	I1030 23:25:49.428959  229016 round_trippers.go:580]     Audit-Id: 0e726210-ca38-4e1a-9c35-7a596380977e
	I1030 23:25:49.428966  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:49.429313  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"315","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6209 chars]
	I1030 23:25:49.926063  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:49.926091  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:49.926100  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:49.926107  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:49.933321  229016 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1030 23:25:49.933353  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:49.933364  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:49.933373  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:49 GMT
	I1030 23:25:49.933382  229016 round_trippers.go:580]     Audit-Id: a8171c37-35f7-47fc-bcf0-5110812eb325
	I1030 23:25:49.933391  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:49.933400  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:49.933413  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:49.933640  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"315","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6209 chars]
	I1030 23:25:49.934088  229016 node_ready.go:58] node "multinode-370491" has status "Ready":"False"
	I1030 23:25:50.425302  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:50.425328  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:50.425337  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:50.425344  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:50.427924  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:25:50.427948  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:50.427958  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:50.427967  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:50 GMT
	I1030 23:25:50.427975  229016 round_trippers.go:580]     Audit-Id: c6a7ea8d-9576-454d-af7d-53c4602d5f7b
	I1030 23:25:50.427984  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:50.427996  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:50.428010  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:50.428367  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"315","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6209 chars]
	I1030 23:25:50.926144  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:50.926177  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:50.926187  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:50.926193  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:50.930059  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:25:50.930087  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:50.930096  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:50.930106  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:50.930112  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:50 GMT
	I1030 23:25:50.930117  229016 round_trippers.go:580]     Audit-Id: 0ad1af83-a499-4237-a5e6-dee076955483
	I1030 23:25:50.930122  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:50.930128  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:50.930478  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"315","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6209 chars]
	I1030 23:25:51.426213  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:51.426239  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:51.426247  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:51.426253  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:51.428920  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:25:51.428955  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:51.428965  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:51.428973  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:51 GMT
	I1030 23:25:51.428981  229016 round_trippers.go:580]     Audit-Id: fc42a143-0a70-4acd-8c59-3bbf840fa51c
	I1030 23:25:51.428989  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:51.428998  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:51.429004  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:51.429665  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"315","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6209 chars]
	I1030 23:25:51.925459  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:51.925488  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:51.925498  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:51.925504  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:51.928403  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:25:51.928427  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:51.928435  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:51 GMT
	I1030 23:25:51.928444  229016 round_trippers.go:580]     Audit-Id: 56912104-d31b-44c0-b326-7c2ef77423e4
	I1030 23:25:51.928454  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:51.928462  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:51.928471  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:51.928479  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:51.928607  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"315","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6209 chars]
	I1030 23:25:52.425309  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:52.425340  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:52.425349  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:52.425356  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:52.428627  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:25:52.428646  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:52.428653  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:52.428660  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:52.428668  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:52.428676  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:52 GMT
	I1030 23:25:52.428684  229016 round_trippers.go:580]     Audit-Id: 864c51f9-6cb2-40a7-9916-65d7bdd09d5f
	I1030 23:25:52.428692  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:52.428955  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"315","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6209 chars]
	I1030 23:25:52.429425  229016 node_ready.go:58] node "multinode-370491" has status "Ready":"False"
	I1030 23:25:52.926320  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:52.926356  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:52.926369  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:52.926378  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:52.931166  229016 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 23:25:52.931191  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:52.931198  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:52.931203  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:52.931209  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:52 GMT
	I1030 23:25:52.931214  229016 round_trippers.go:580]     Audit-Id: 979b5061-e03c-41ff-820a-f76b7750148d
	I1030 23:25:52.931219  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:52.931224  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:52.931451  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"315","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6209 chars]
	I1030 23:25:53.426197  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:53.426227  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:53.426240  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:53.426250  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:53.431689  229016 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1030 23:25:53.431710  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:53.431717  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:53.431722  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:53.431727  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:53.431732  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:53 GMT
	I1030 23:25:53.431739  229016 round_trippers.go:580]     Audit-Id: 2e9d4283-14df-4d87-974b-cc0d907285e2
	I1030 23:25:53.431746  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:53.432492  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"388","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6026 chars]
	I1030 23:25:53.432804  229016 node_ready.go:49] node "multinode-370491" has status "Ready":"True"
	I1030 23:25:53.432817  229016 node_ready.go:38] duration metric: took 5.515300583s waiting for node "multinode-370491" to be "Ready" ...
	I1030 23:25:53.432827  229016 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 23:25:53.432882  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I1030 23:25:53.432890  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:53.432897  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:53.432903  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:53.440469  229016 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1030 23:25:53.440497  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:53.440507  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:53.440516  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:53.440524  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:53.440530  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:53.440539  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:53 GMT
	I1030 23:25:53.440549  229016 round_trippers.go:580]     Audit-Id: e4239879-6c77-4b4f-9e5d-5786124fdde6
	I1030 23:25:53.444734  229016 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"394"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"392","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54592 chars]
	I1030 23:25:53.447688  229016 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6pgvt" in "kube-system" namespace to be "Ready" ...
	I1030 23:25:53.447759  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6pgvt
	I1030 23:25:53.447768  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:53.447775  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:53.447780  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:53.451348  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:25:53.451368  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:53.451377  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:53.451385  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:53.451393  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:53 GMT
	I1030 23:25:53.451401  229016 round_trippers.go:580]     Audit-Id: 29429864-7f6f-4f9f-9bb9-36099db77d24
	I1030 23:25:53.451409  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:53.451421  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:53.451641  229016 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"392","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1030 23:25:53.452047  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:53.452062  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:53.452069  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:53.452075  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:53.455261  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:25:53.455280  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:53.455289  229016 round_trippers.go:580]     Audit-Id: 1eaccdab-ac0d-4f21-8168-aba8e87f0222
	I1030 23:25:53.455297  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:53.455307  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:53.455315  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:53.455323  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:53.455332  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:53 GMT
	I1030 23:25:53.455467  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"388","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6026 chars]
	I1030 23:25:53.455895  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6pgvt
	I1030 23:25:53.455912  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:53.455923  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:53.455932  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:53.458974  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:25:53.458990  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:53.458997  229016 round_trippers.go:580]     Audit-Id: ffa81680-0bc7-4c9d-8584-5183b086307e
	I1030 23:25:53.459003  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:53.459007  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:53.459012  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:53.459017  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:53.459022  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:53 GMT
	I1030 23:25:53.459316  229016 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"392","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1030 23:25:53.459787  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:53.459805  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:53.459815  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:53.459825  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:53.462593  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:25:53.462609  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:53.462615  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:53 GMT
	I1030 23:25:53.462620  229016 round_trippers.go:580]     Audit-Id: 8db798cd-0318-48c7-82b7-dee47e78872e
	I1030 23:25:53.462625  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:53.462630  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:53.462635  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:53.462640  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:53.462881  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"388","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6026 chars]
	I1030 23:25:53.964030  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6pgvt
	I1030 23:25:53.964062  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:53.964071  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:53.964077  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:53.966981  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:25:53.967018  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:53.967029  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:53 GMT
	I1030 23:25:53.967038  229016 round_trippers.go:580]     Audit-Id: 90764410-ba47-4e24-b634-cd173f774d2f
	I1030 23:25:53.967046  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:53.967053  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:53.967061  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:53.967069  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:53.967257  229016 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"392","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1030 23:25:53.967921  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:53.967944  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:53.967955  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:53.967965  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:53.970300  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:25:53.970324  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:53.970334  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:53 GMT
	I1030 23:25:53.970343  229016 round_trippers.go:580]     Audit-Id: 1c68879d-79f8-44dc-960d-6369d0de21ef
	I1030 23:25:53.970351  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:53.970358  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:53.970367  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:53.970375  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:53.970702  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"388","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6026 chars]
	I1030 23:25:54.464401  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6pgvt
	I1030 23:25:54.464426  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:54.464434  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:54.464440  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:54.467372  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:25:54.467400  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:54.467411  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:54.467421  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:54 GMT
	I1030 23:25:54.467431  229016 round_trippers.go:580]     Audit-Id: 98401071-d562-413b-9fed-114ac1c822c2
	I1030 23:25:54.467438  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:54.467449  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:54.467457  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:54.467665  229016 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"392","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1030 23:25:54.468270  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:54.468292  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:54.468305  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:54.468321  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:54.470375  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:25:54.470396  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:54.470405  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:54.470412  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:54.470419  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:54.470429  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:54 GMT
	I1030 23:25:54.470439  229016 round_trippers.go:580]     Audit-Id: 36f822bc-f85e-4a1f-9925-4d80a788f6d8
	I1030 23:25:54.470446  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:54.470665  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"388","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6026 chars]
	I1030 23:25:54.964397  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6pgvt
	I1030 23:25:54.964420  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:54.964429  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:54.964435  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:54.967237  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:25:54.967258  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:54.967265  229016 round_trippers.go:580]     Audit-Id: d2eb73e3-da59-40f8-b347-a96240f01ac4
	I1030 23:25:54.967271  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:54.967276  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:54.967281  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:54.967287  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:54.967294  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:54 GMT
	I1030 23:25:54.967565  229016 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"407","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1030 23:25:54.968171  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:54.968190  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:54.968198  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:54.968204  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:54.970609  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:25:54.970630  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:54.970640  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:54.970647  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:54.970653  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:54.970658  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:54.970663  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:54 GMT
	I1030 23:25:54.970667  229016 round_trippers.go:580]     Audit-Id: a3fdce00-6dec-4983-b598-96de60f4f4e6
	I1030 23:25:54.970848  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"388","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6026 chars]
	I1030 23:25:54.971254  229016 pod_ready.go:92] pod "coredns-5dd5756b68-6pgvt" in "kube-system" namespace has status "Ready":"True"
	I1030 23:25:54.971272  229016 pod_ready.go:81] duration metric: took 1.523561121s waiting for pod "coredns-5dd5756b68-6pgvt" in "kube-system" namespace to be "Ready" ...
	I1030 23:25:54.971285  229016 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:25:54.971346  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-370491
	I1030 23:25:54.971357  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:54.971368  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:54.971379  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:54.973441  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:25:54.973462  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:54.973471  229016 round_trippers.go:580]     Audit-Id: 88eea7fa-a7d4-4b3f-b3ab-ffd0c8f6eda3
	I1030 23:25:54.973479  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:54.973487  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:54.973495  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:54.973503  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:54.973511  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:54 GMT
	I1030 23:25:54.973920  229016 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-370491","namespace":"kube-system","uid":"eb24307f-f00b-4406-bb05-b18eafd0eca1","resourceVersion":"313","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.231:2379","kubernetes.io/config.hash":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.mirror":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.seen":"2023-10-30T23:25:35.493661052Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6078 chars]
	I1030 23:25:54.974312  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:54.974325  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:54.974334  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:54.974340  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:54.976459  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:25:54.976475  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:54.976485  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:54.976492  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:54.976501  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:54.976510  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:54.976520  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:54 GMT
	I1030 23:25:54.976539  229016 round_trippers.go:580]     Audit-Id: 843e9bbc-2318-4cdd-b592-c2c46d442a2c
	I1030 23:25:54.976709  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"388","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6026 chars]
	I1030 23:25:54.977077  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-370491
	I1030 23:25:54.977089  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:54.977099  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:54.977111  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:54.978656  229016 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:25:54.978673  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:54.978682  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:54.978691  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:54.978699  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:54 GMT
	I1030 23:25:54.978708  229016 round_trippers.go:580]     Audit-Id: 4a123dba-22c3-426e-bdcc-de4660bbd13e
	I1030 23:25:54.978718  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:54.978732  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:54.978840  229016 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-370491","namespace":"kube-system","uid":"eb24307f-f00b-4406-bb05-b18eafd0eca1","resourceVersion":"313","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.231:2379","kubernetes.io/config.hash":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.mirror":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.seen":"2023-10-30T23:25:35.493661052Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6078 chars]
	I1030 23:25:54.979197  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:54.979209  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:54.979216  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:54.979221  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:54.980853  229016 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:25:54.980869  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:54.980877  229016 round_trippers.go:580]     Audit-Id: 2198ecf3-24c7-44ab-bdbd-5f2ca59842d1
	I1030 23:25:54.980886  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:54.980895  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:54.980904  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:54.980917  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:54.980927  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:54 GMT
	I1030 23:25:54.981128  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"388","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6026 chars]
	I1030 23:25:55.482414  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-370491
	I1030 23:25:55.482441  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:55.482453  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:55.482462  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:55.486852  229016 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 23:25:55.486879  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:55.486890  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:55.486898  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:55.486903  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:55.486908  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:55 GMT
	I1030 23:25:55.486913  229016 round_trippers.go:580]     Audit-Id: 59b4eead-195a-49fc-a30e-626803c21948
	I1030 23:25:55.486919  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:55.487173  229016 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-370491","namespace":"kube-system","uid":"eb24307f-f00b-4406-bb05-b18eafd0eca1","resourceVersion":"313","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.231:2379","kubernetes.io/config.hash":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.mirror":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.seen":"2023-10-30T23:25:35.493661052Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6078 chars]
	I1030 23:25:55.487736  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:55.487754  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:55.487766  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:55.487775  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:55.490083  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:25:55.490108  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:55.490118  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:55.490126  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:55.490137  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:55.490152  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:55 GMT
	I1030 23:25:55.490162  229016 round_trippers.go:580]     Audit-Id: 688f5d3c-e88b-4840-bde0-b48a12281b8d
	I1030 23:25:55.490175  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:55.490372  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"388","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6026 chars]
	I1030 23:25:55.982101  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-370491
	I1030 23:25:55.982137  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:55.982147  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:55.982155  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:55.984666  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:25:55.984687  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:55.984695  229016 round_trippers.go:580]     Audit-Id: 191a2e20-8c30-438d-aedc-34ea6f29ae93
	I1030 23:25:55.984700  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:55.984705  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:55.984710  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:55.984715  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:55.984722  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:55 GMT
	I1030 23:25:55.984897  229016 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-370491","namespace":"kube-system","uid":"eb24307f-f00b-4406-bb05-b18eafd0eca1","resourceVersion":"413","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.231:2379","kubernetes.io/config.hash":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.mirror":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.seen":"2023-10-30T23:25:35.493661052Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1030 23:25:55.985434  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:55.985452  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:55.985461  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:55.985468  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:55.988223  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:25:55.988242  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:55.988250  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:55.988257  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:55.988262  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:55.988267  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:55.988272  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:55 GMT
	I1030 23:25:55.988279  229016 round_trippers.go:580]     Audit-Id: c012b4a4-6a13-443f-b98d-637a5c370a8c
	I1030 23:25:55.988519  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"388","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6026 chars]
	I1030 23:25:55.988842  229016 pod_ready.go:92] pod "etcd-multinode-370491" in "kube-system" namespace has status "Ready":"True"
	I1030 23:25:55.988858  229016 pod_ready.go:81] duration metric: took 1.017566012s waiting for pod "etcd-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:25:55.988873  229016 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:25:55.988921  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-370491
	I1030 23:25:55.988929  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:55.988949  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:55.988964  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:55.991577  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:25:55.991593  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:55.991600  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:55.991605  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:55.991610  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:55.991627  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:55 GMT
	I1030 23:25:55.991639  229016 round_trippers.go:580]     Audit-Id: 0f49ad55-0367-4845-a9b6-f498f2a2e613
	I1030 23:25:55.991650  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:55.991813  229016 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-370491","namespace":"kube-system","uid":"d1874c7c-46ee-42eb-a395-c0d0138b3422","resourceVersion":"414","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.231:8443","kubernetes.io/config.hash":"377aac2edfa5973c73516a60b3dd1cd5","kubernetes.io/config.mirror":"377aac2edfa5973c73516a60b3dd1cd5","kubernetes.io/config.seen":"2023-10-30T23:25:35.493664410Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1030 23:25:55.992175  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:55.992186  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:55.992197  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:55.992210  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:55.994110  229016 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:25:55.994125  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:55.994141  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:55.994149  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:55.994161  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:55 GMT
	I1030 23:25:55.994171  229016 round_trippers.go:580]     Audit-Id: 9647a30e-ca0b-425c-9bf0-4af19fc857f0
	I1030 23:25:55.994178  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:55.994185  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:55.994468  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"388","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6026 chars]
	I1030 23:25:55.994843  229016 pod_ready.go:92] pod "kube-apiserver-multinode-370491" in "kube-system" namespace has status "Ready":"True"
	I1030 23:25:55.994860  229016 pod_ready.go:81] duration metric: took 5.978608ms waiting for pod "kube-apiserver-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:25:55.994871  229016 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:25:56.027153  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-370491
	I1030 23:25:56.027180  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:56.027188  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:56.027194  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:56.029458  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:25:56.029473  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:56.029479  229016 round_trippers.go:580]     Audit-Id: 472054e5-6002-404a-986d-848c8ac3e565
	I1030 23:25:56.029485  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:56.029494  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:56.029503  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:56.029513  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:56.029522  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:56 GMT
	I1030 23:25:56.029812  229016 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-370491","namespace":"kube-system","uid":"4da6c57f-cec4-498b-a390-3fa2f8619a0b","resourceVersion":"415","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"55259bd1b9f1e240aa9139582b4696e7","kubernetes.io/config.mirror":"55259bd1b9f1e240aa9139582b4696e7","kubernetes.io/config.seen":"2023-10-30T23:25:35.493665415Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1030 23:25:56.226678  229016 request.go:629] Waited for 196.37348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:56.226742  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:56.226747  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:56.226755  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:56.226760  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:56.230085  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:25:56.230107  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:56.230114  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:56.230120  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:56.230125  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:56.230132  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:56 GMT
	I1030 23:25:56.230137  229016 round_trippers.go:580]     Audit-Id: 8760acc9-6c00-403e-b5ca-ec1bf1d12895
	I1030 23:25:56.230142  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:56.230577  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"388","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6026 chars]
	I1030 23:25:56.231006  229016 pod_ready.go:92] pod "kube-controller-manager-multinode-370491" in "kube-system" namespace has status "Ready":"True"
	I1030 23:25:56.231026  229016 pod_ready.go:81] duration metric: took 236.145522ms waiting for pod "kube-controller-manager-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:25:56.231042  229016 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xbsl5" in "kube-system" namespace to be "Ready" ...
	I1030 23:25:56.426438  229016 request.go:629] Waited for 195.31098ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xbsl5
	I1030 23:25:56.426510  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xbsl5
	I1030 23:25:56.426515  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:56.426523  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:56.426529  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:56.429738  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:25:56.429760  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:56.429766  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:56.429772  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:56 GMT
	I1030 23:25:56.429778  229016 round_trippers.go:580]     Audit-Id: b98c72b5-c121-4a58-89cb-f745b83ee1b0
	I1030 23:25:56.429786  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:56.429794  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:56.429803  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:56.430188  229016 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xbsl5","generateName":"kube-proxy-","namespace":"kube-system","uid":"eb41a78a-bf80-4546-b7d6-423a8c3ad0e1","resourceVersion":"377","creationTimestamp":"2023-10-30T23:25:47Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8ea24659-b585-4c83-ad95-b587ea718f59","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ea24659-b585-4c83-ad95-b587ea718f59\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1030 23:25:56.626408  229016 request.go:629] Waited for 195.673446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:56.626474  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:56.626479  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:56.626486  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:56.626493  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:56.629583  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:25:56.629605  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:56.629612  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:56.629617  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:56 GMT
	I1030 23:25:56.629622  229016 round_trippers.go:580]     Audit-Id: b4ea549e-3190-4532-85f5-d98e19264d24
	I1030 23:25:56.629627  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:56.629633  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:56.629641  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:56.629905  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"388","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6026 chars]
	I1030 23:25:56.630331  229016 pod_ready.go:92] pod "kube-proxy-xbsl5" in "kube-system" namespace has status "Ready":"True"
	I1030 23:25:56.630349  229016 pod_ready.go:81] duration metric: took 399.299799ms waiting for pod "kube-proxy-xbsl5" in "kube-system" namespace to be "Ready" ...
	I1030 23:25:56.630362  229016 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:25:56.826847  229016 request.go:629] Waited for 196.40546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-370491
	I1030 23:25:56.826969  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-370491
	I1030 23:25:56.826980  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:56.826988  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:56.826994  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:56.830155  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:25:56.830179  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:56.830186  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:56.830192  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:56.830197  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:56.830204  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:56 GMT
	I1030 23:25:56.830209  229016 round_trippers.go:580]     Audit-Id: 75f8b769-69cd-4b75-9fbe-937ea51e316d
	I1030 23:25:56.830214  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:56.830364  229016 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-370491","namespace":"kube-system","uid":"b71476bb-1843-4ff9-8639-40ae73b72c8b","resourceVersion":"379","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dd3eb04179d9bdc0a8332c92e6e42d18","kubernetes.io/config.mirror":"dd3eb04179d9bdc0a8332c92e6e42d18","kubernetes.io/config.seen":"2023-10-30T23:25:35.493666103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1030 23:25:57.027242  229016 request.go:629] Waited for 196.455335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:57.027319  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:25:57.027326  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:57.027336  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:57.027346  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:57.030009  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:25:57.030033  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:57.030040  229016 round_trippers.go:580]     Audit-Id: 090a1df1-22b2-4a9b-a352-04717eb78769
	I1030 23:25:57.030046  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:57.030051  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:57.030056  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:57.030062  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:57.030069  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:57 GMT
	I1030 23:25:57.030455  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"388","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6026 chars]
	I1030 23:25:57.030767  229016 pod_ready.go:92] pod "kube-scheduler-multinode-370491" in "kube-system" namespace has status "Ready":"True"
	I1030 23:25:57.030781  229016 pod_ready.go:81] duration metric: took 400.407499ms waiting for pod "kube-scheduler-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:25:57.030791  229016 pod_ready.go:38] duration metric: took 3.597955168s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
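Editor's note: the block above is the harness's pod_ready loop — it repeatedly GETs each system pod (and its node) until the pod reports the Ready condition, logging the round_trippers traffic for every request. Below is a minimal client-go sketch of that idea for one pod. The kubeconfig path, poll interval, and error handling are assumptions for illustration; they are not minikube's actual pod_ready.go code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path; the harness builds its REST config differently.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms for up to 6 minutes, mirroring the "waiting up to 6m0s" log lines.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-multinode-370491", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet" and keep polling
			}
			return isPodReady(pod), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}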
	I1030 23:25:57.030806  229016 api_server.go:52] waiting for apiserver process to appear ...
	I1030 23:25:57.030859  229016 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 23:25:57.047175  229016 command_runner.go:130] > 1063
	I1030 23:25:57.047232  229016 api_server.go:72] duration metric: took 9.308231504s to wait for apiserver process to appear ...
	I1030 23:25:57.047288  229016 api_server.go:88] waiting for apiserver healthz status ...
	I1030 23:25:57.047313  229016 api_server.go:253] Checking apiserver healthz at https://192.168.39.231:8443/healthz ...
	I1030 23:25:57.053064  229016 api_server.go:279] https://192.168.39.231:8443/healthz returned 200:
	ok
	I1030 23:25:57.053132  229016 round_trippers.go:463] GET https://192.168.39.231:8443/version
	I1030 23:25:57.053139  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:57.053147  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:57.053155  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:57.054304  229016 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:25:57.054326  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:57.054337  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:57.054345  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:57.054357  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:57.054367  229016 round_trippers.go:580]     Content-Length: 264
	I1030 23:25:57.054377  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:57 GMT
	I1030 23:25:57.054389  229016 round_trippers.go:580]     Audit-Id: e1db8137-677c-4fb6-8183-caeec4908be9
	I1030 23:25:57.054398  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:57.054423  229016 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1030 23:25:57.054548  229016 api_server.go:141] control plane version: v1.28.3
	I1030 23:25:57.054567  229016 api_server.go:131] duration metric: took 7.268757ms to wait for apiserver health ...
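Editor's note: after the pods are Ready, the harness probes /healthz (expecting the literal body "ok") and reads /version; the JSON above is the standard version.Info payload. A hedged sketch of the same two calls via client-go's discovery client follows — the kubeconfig path is again an assumption, not what the harness uses.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /healthz through the authenticated REST client; a healthy apiserver returns "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version; this returns the same payload logged above (major/minor/gitVersion/...).
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion)
}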
	I1030 23:25:57.054578  229016 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 23:25:57.227004  229016 request.go:629] Waited for 172.35135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I1030 23:25:57.227083  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I1030 23:25:57.227089  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:57.227103  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:57.227110  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:57.234123  229016 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1030 23:25:57.234151  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:57.234163  229016 round_trippers.go:580]     Audit-Id: 620ceabb-c1cd-4568-9611-1d850dfa21dd
	I1030 23:25:57.234172  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:57.234182  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:57.234191  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:57.234200  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:57.234212  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:57 GMT
	I1030 23:25:57.237105  229016 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"407","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53995 chars]
	I1030 23:25:57.239140  229016 system_pods.go:59] 8 kube-system pods found
	I1030 23:25:57.239164  229016 system_pods.go:61] "coredns-5dd5756b68-6pgvt" [d854be1d-ae4e-420a-9853-253f0258915c] Running
	I1030 23:25:57.239169  229016 system_pods.go:61] "etcd-multinode-370491" [eb24307f-f00b-4406-bb05-b18eafd0eca1] Running
	I1030 23:25:57.239172  229016 system_pods.go:61] "kindnet-m9f5k" [a79ceb52-48df-4240-9edc-05c81bf58f73] Running
	I1030 23:25:57.239177  229016 system_pods.go:61] "kube-apiserver-multinode-370491" [d1874c7c-46ee-42eb-a395-c0d0138b3422] Running
	I1030 23:25:57.239182  229016 system_pods.go:61] "kube-controller-manager-multinode-370491" [4da6c57f-cec4-498b-a390-3fa2f8619a0b] Running
	I1030 23:25:57.239186  229016 system_pods.go:61] "kube-proxy-xbsl5" [eb41a78a-bf80-4546-b7d6-423a8c3ad0e1] Running
	I1030 23:25:57.239190  229016 system_pods.go:61] "kube-scheduler-multinode-370491" [b71476bb-1843-4ff9-8639-40ae73b72c8b] Running
	I1030 23:25:57.239194  229016 system_pods.go:61] "storage-provisioner" [6f2bbacd-e138-4f82-961e-76f1daf88ccd] Running
	I1030 23:25:57.239199  229016 system_pods.go:74] duration metric: took 184.614526ms to wait for pod list to return data ...
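Editor's note: the 8-pod summary above comes from a single list request on /api/v1/namespaces/kube-system/pods, after which the harness checks that each item is Running. A rough client-go equivalent (kubeconfig path assumed):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// One LIST call, like the PodList response logged above.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		if p.Status.Phase != corev1.PodRunning {
			fmt.Println("  -> not Running yet")
		}
	}
}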
	I1030 23:25:57.239210  229016 default_sa.go:34] waiting for default service account to be created ...
	I1030 23:25:57.426693  229016 request.go:629] Waited for 187.383079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/default/serviceaccounts
	I1030 23:25:57.426771  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/default/serviceaccounts
	I1030 23:25:57.426780  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:57.426793  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:57.426803  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:57.433239  229016 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1030 23:25:57.433271  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:57.433281  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:57.433289  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:57.433296  229016 round_trippers.go:580]     Content-Length: 261
	I1030 23:25:57.433304  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:57 GMT
	I1030 23:25:57.433312  229016 round_trippers.go:580]     Audit-Id: 263f6465-b4c8-4bac-b75f-3e94942fa000
	I1030 23:25:57.433320  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:57.433333  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:57.433518  229016 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"418"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"88ed5dd6-6353-42c8-b32f-dd95ef92c5ee","resourceVersion":"297","creationTimestamp":"2023-10-30T23:25:47Z"}}]}
	I1030 23:25:57.433798  229016 default_sa.go:45] found service account: "default"
	I1030 23:25:57.433828  229016 default_sa.go:55] duration metric: took 194.611474ms for default service account to be created ...
	I1030 23:25:57.433843  229016 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 23:25:57.626282  229016 request.go:629] Waited for 192.328657ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I1030 23:25:57.626349  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I1030 23:25:57.626355  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:57.626363  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:57.626369  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:57.629767  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:25:57.629801  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:57.629811  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:57 GMT
	I1030 23:25:57.629819  229016 round_trippers.go:580]     Audit-Id: 062c6ecc-8260-4f1e-9e6b-7f02184c1f82
	I1030 23:25:57.629827  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:57.629835  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:57.629843  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:57.629851  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:57.630863  229016 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"407","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53995 chars]
	I1030 23:25:57.633029  229016 system_pods.go:86] 8 kube-system pods found
	I1030 23:25:57.633055  229016 system_pods.go:89] "coredns-5dd5756b68-6pgvt" [d854be1d-ae4e-420a-9853-253f0258915c] Running
	I1030 23:25:57.633060  229016 system_pods.go:89] "etcd-multinode-370491" [eb24307f-f00b-4406-bb05-b18eafd0eca1] Running
	I1030 23:25:57.633064  229016 system_pods.go:89] "kindnet-m9f5k" [a79ceb52-48df-4240-9edc-05c81bf58f73] Running
	I1030 23:25:57.633068  229016 system_pods.go:89] "kube-apiserver-multinode-370491" [d1874c7c-46ee-42eb-a395-c0d0138b3422] Running
	I1030 23:25:57.633073  229016 system_pods.go:89] "kube-controller-manager-multinode-370491" [4da6c57f-cec4-498b-a390-3fa2f8619a0b] Running
	I1030 23:25:57.633076  229016 system_pods.go:89] "kube-proxy-xbsl5" [eb41a78a-bf80-4546-b7d6-423a8c3ad0e1] Running
	I1030 23:25:57.633080  229016 system_pods.go:89] "kube-scheduler-multinode-370491" [b71476bb-1843-4ff9-8639-40ae73b72c8b] Running
	I1030 23:25:57.633084  229016 system_pods.go:89] "storage-provisioner" [6f2bbacd-e138-4f82-961e-76f1daf88ccd] Running
	I1030 23:25:57.633091  229016 system_pods.go:126] duration metric: took 199.241822ms to wait for k8s-apps to be running ...
	I1030 23:25:57.633099  229016 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 23:25:57.633144  229016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 23:25:57.648406  229016 system_svc.go:56] duration metric: took 15.295842ms WaitForService to wait for kubelet.
	I1030 23:25:57.648436  229016 kubeadm.go:581] duration metric: took 9.909443864s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1030 23:25:57.648456  229016 node_conditions.go:102] verifying NodePressure condition ...
	I1030 23:25:57.826677  229016 request.go:629] Waited for 178.14159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes
	I1030 23:25:57.826754  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes
	I1030 23:25:57.826761  229016 round_trippers.go:469] Request Headers:
	I1030 23:25:57.826771  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:25:57.826782  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:25:57.830296  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:25:57.830326  229016 round_trippers.go:577] Response Headers:
	I1030 23:25:57.830336  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:25:57.830344  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:25:57.830352  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:25:57 GMT
	I1030 23:25:57.830359  229016 round_trippers.go:580]     Audit-Id: 8191e3a3-dfb6-4bba-86f3-01110955f689
	I1030 23:25:57.830366  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:25:57.830375  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:25:57.830537  229016 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"418","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manage
dFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1"," [truncated 5959 chars]
	I1030 23:25:57.831040  229016 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1030 23:25:57.831067  229016 node_conditions.go:123] node cpu capacity is 2
	I1030 23:25:57.831077  229016 node_conditions.go:105] duration metric: took 182.61745ms to run NodePressure ...
	I1030 23:25:57.831088  229016 start.go:228] waiting for startup goroutines ...
	I1030 23:25:57.831098  229016 start.go:233] waiting for cluster config update ...
	I1030 23:25:57.831108  229016 start.go:242] writing updated cluster config ...
	I1030 23:25:57.833267  229016 out.go:177] 
	I1030 23:25:57.834955  229016 config.go:182] Loaded profile config "multinode-370491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:25:57.835054  229016 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/config.json ...
	I1030 23:25:57.836889  229016 out.go:177] * Starting worker node multinode-370491-m02 in cluster multinode-370491
	I1030 23:25:57.838243  229016 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1030 23:25:57.838272  229016 cache.go:56] Caching tarball of preloaded images
	I1030 23:25:57.838372  229016 preload.go:174] Found /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 23:25:57.838383  229016 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1030 23:25:57.838440  229016 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/config.json ...
	I1030 23:25:57.838596  229016 start.go:365] acquiring machines lock for multinode-370491-m02: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 23:25:57.838639  229016 start.go:369] acquired machines lock for "multinode-370491-m02" in 23.545µs
	I1030 23:25:57.838656  229016 start.go:93] Provisioning new machine with config: &{Name:multinode-370491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.3 ClusterName:multinode-370491 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:
true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1030 23:25:57.838713  229016 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1030 23:25:57.840462  229016 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1030 23:25:57.840541  229016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:25:57.840574  229016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:25:57.855133  229016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35463
	I1030 23:25:57.855565  229016 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:25:57.856014  229016 main.go:141] libmachine: Using API Version  1
	I1030 23:25:57.856036  229016 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:25:57.856346  229016 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:25:57.856533  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetMachineName
	I1030 23:25:57.856662  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .DriverName
	I1030 23:25:57.856817  229016 start.go:159] libmachine.API.Create for "multinode-370491" (driver="kvm2")
	I1030 23:25:57.856848  229016 client.go:168] LocalClient.Create starting
	I1030 23:25:57.856885  229016 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem
	I1030 23:25:57.856924  229016 main.go:141] libmachine: Decoding PEM data...
	I1030 23:25:57.856965  229016 main.go:141] libmachine: Parsing certificate...
	I1030 23:25:57.857022  229016 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem
	I1030 23:25:57.857042  229016 main.go:141] libmachine: Decoding PEM data...
	I1030 23:25:57.857053  229016 main.go:141] libmachine: Parsing certificate...
	I1030 23:25:57.857074  229016 main.go:141] libmachine: Running pre-create checks...
	I1030 23:25:57.857084  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .PreCreateCheck
	I1030 23:25:57.857270  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetConfigRaw
	I1030 23:25:57.857665  229016 main.go:141] libmachine: Creating machine...
	I1030 23:25:57.857683  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .Create
	I1030 23:25:57.857817  229016 main.go:141] libmachine: (multinode-370491-m02) Creating KVM machine...
	I1030 23:25:57.858873  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | found existing default KVM network
	I1030 23:25:57.859020  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | found existing private KVM network mk-multinode-370491
	I1030 23:25:57.859179  229016 main.go:141] libmachine: (multinode-370491-m02) Setting up store path in /home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m02 ...
	I1030 23:25:57.859206  229016 main.go:141] libmachine: (multinode-370491-m02) Building disk image from file:///home/jenkins/minikube-integration/17527-208817/.minikube/cache/iso/amd64/minikube-v1.32.0-1698684775-17527-amd64.iso
	I1030 23:25:57.859310  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | I1030 23:25:57.859182  229377 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17527-208817/.minikube
	I1030 23:25:57.859422  229016 main.go:141] libmachine: (multinode-370491-m02) Downloading /home/jenkins/minikube-integration/17527-208817/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17527-208817/.minikube/cache/iso/amd64/minikube-v1.32.0-1698684775-17527-amd64.iso...
	I1030 23:25:58.087659  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | I1030 23:25:58.087518  229377 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m02/id_rsa...
	I1030 23:25:58.209998  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | I1030 23:25:58.209871  229377 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m02/multinode-370491-m02.rawdisk...
	I1030 23:25:58.210034  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | Writing magic tar header
	I1030 23:25:58.210054  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | Writing SSH key tar header
	I1030 23:25:58.210070  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | I1030 23:25:58.210010  229377 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m02 ...
	I1030 23:25:58.210169  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m02
	I1030 23:25:58.210191  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube/machines
	I1030 23:25:58.210205  229016 main.go:141] libmachine: (multinode-370491-m02) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m02 (perms=drwx------)
	I1030 23:25:58.210219  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube
	I1030 23:25:58.210231  229016 main.go:141] libmachine: (multinode-370491-m02) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube/machines (perms=drwxr-xr-x)
	I1030 23:25:58.210243  229016 main.go:141] libmachine: (multinode-370491-m02) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube (perms=drwxr-xr-x)
	I1030 23:25:58.210253  229016 main.go:141] libmachine: (multinode-370491-m02) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817 (perms=drwxrwxr-x)
	I1030 23:25:58.210261  229016 main.go:141] libmachine: (multinode-370491-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1030 23:25:58.210271  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817
	I1030 23:25:58.210277  229016 main.go:141] libmachine: (multinode-370491-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1030 23:25:58.210286  229016 main.go:141] libmachine: (multinode-370491-m02) Creating domain...
	I1030 23:25:58.210328  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1030 23:25:58.210360  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | Checking permissions on dir: /home/jenkins
	I1030 23:25:58.210405  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | Checking permissions on dir: /home
	I1030 23:25:58.210433  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | Skipping /home - not owner
	I1030 23:25:58.211285  229016 main.go:141] libmachine: (multinode-370491-m02) define libvirt domain using xml: 
	I1030 23:25:58.211308  229016 main.go:141] libmachine: (multinode-370491-m02) <domain type='kvm'>
	I1030 23:25:58.211319  229016 main.go:141] libmachine: (multinode-370491-m02)   <name>multinode-370491-m02</name>
	I1030 23:25:58.211333  229016 main.go:141] libmachine: (multinode-370491-m02)   <memory unit='MiB'>2200</memory>
	I1030 23:25:58.211346  229016 main.go:141] libmachine: (multinode-370491-m02)   <vcpu>2</vcpu>
	I1030 23:25:58.211352  229016 main.go:141] libmachine: (multinode-370491-m02)   <features>
	I1030 23:25:58.211358  229016 main.go:141] libmachine: (multinode-370491-m02)     <acpi/>
	I1030 23:25:58.211366  229016 main.go:141] libmachine: (multinode-370491-m02)     <apic/>
	I1030 23:25:58.211372  229016 main.go:141] libmachine: (multinode-370491-m02)     <pae/>
	I1030 23:25:58.211385  229016 main.go:141] libmachine: (multinode-370491-m02)     
	I1030 23:25:58.211396  229016 main.go:141] libmachine: (multinode-370491-m02)   </features>
	I1030 23:25:58.211403  229016 main.go:141] libmachine: (multinode-370491-m02)   <cpu mode='host-passthrough'>
	I1030 23:25:58.211412  229016 main.go:141] libmachine: (multinode-370491-m02)   
	I1030 23:25:58.211425  229016 main.go:141] libmachine: (multinode-370491-m02)   </cpu>
	I1030 23:25:58.211436  229016 main.go:141] libmachine: (multinode-370491-m02)   <os>
	I1030 23:25:58.211455  229016 main.go:141] libmachine: (multinode-370491-m02)     <type>hvm</type>
	I1030 23:25:58.211473  229016 main.go:141] libmachine: (multinode-370491-m02)     <boot dev='cdrom'/>
	I1030 23:25:58.211484  229016 main.go:141] libmachine: (multinode-370491-m02)     <boot dev='hd'/>
	I1030 23:25:58.211496  229016 main.go:141] libmachine: (multinode-370491-m02)     <bootmenu enable='no'/>
	I1030 23:25:58.211510  229016 main.go:141] libmachine: (multinode-370491-m02)   </os>
	I1030 23:25:58.211525  229016 main.go:141] libmachine: (multinode-370491-m02)   <devices>
	I1030 23:25:58.211536  229016 main.go:141] libmachine: (multinode-370491-m02)     <disk type='file' device='cdrom'>
	I1030 23:25:58.211556  229016 main.go:141] libmachine: (multinode-370491-m02)       <source file='/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m02/boot2docker.iso'/>
	I1030 23:25:58.211570  229016 main.go:141] libmachine: (multinode-370491-m02)       <target dev='hdc' bus='scsi'/>
	I1030 23:25:58.211585  229016 main.go:141] libmachine: (multinode-370491-m02)       <readonly/>
	I1030 23:25:58.211601  229016 main.go:141] libmachine: (multinode-370491-m02)     </disk>
	I1030 23:25:58.211617  229016 main.go:141] libmachine: (multinode-370491-m02)     <disk type='file' device='disk'>
	I1030 23:25:58.211632  229016 main.go:141] libmachine: (multinode-370491-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1030 23:25:58.211654  229016 main.go:141] libmachine: (multinode-370491-m02)       <source file='/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m02/multinode-370491-m02.rawdisk'/>
	I1030 23:25:58.211667  229016 main.go:141] libmachine: (multinode-370491-m02)       <target dev='hda' bus='virtio'/>
	I1030 23:25:58.211705  229016 main.go:141] libmachine: (multinode-370491-m02)     </disk>
	I1030 23:25:58.211735  229016 main.go:141] libmachine: (multinode-370491-m02)     <interface type='network'>
	I1030 23:25:58.211772  229016 main.go:141] libmachine: (multinode-370491-m02)       <source network='mk-multinode-370491'/>
	I1030 23:25:58.211798  229016 main.go:141] libmachine: (multinode-370491-m02)       <model type='virtio'/>
	I1030 23:25:58.211813  229016 main.go:141] libmachine: (multinode-370491-m02)     </interface>
	I1030 23:25:58.211825  229016 main.go:141] libmachine: (multinode-370491-m02)     <interface type='network'>
	I1030 23:25:58.211840  229016 main.go:141] libmachine: (multinode-370491-m02)       <source network='default'/>
	I1030 23:25:58.211853  229016 main.go:141] libmachine: (multinode-370491-m02)       <model type='virtio'/>
	I1030 23:25:58.211876  229016 main.go:141] libmachine: (multinode-370491-m02)     </interface>
	I1030 23:25:58.211895  229016 main.go:141] libmachine: (multinode-370491-m02)     <serial type='pty'>
	I1030 23:25:58.211915  229016 main.go:141] libmachine: (multinode-370491-m02)       <target port='0'/>
	I1030 23:25:58.211927  229016 main.go:141] libmachine: (multinode-370491-m02)     </serial>
	I1030 23:25:58.211942  229016 main.go:141] libmachine: (multinode-370491-m02)     <console type='pty'>
	I1030 23:25:58.211955  229016 main.go:141] libmachine: (multinode-370491-m02)       <target type='serial' port='0'/>
	I1030 23:25:58.211967  229016 main.go:141] libmachine: (multinode-370491-m02)     </console>
	I1030 23:25:58.211980  229016 main.go:141] libmachine: (multinode-370491-m02)     <rng model='virtio'>
	I1030 23:25:58.211997  229016 main.go:141] libmachine: (multinode-370491-m02)       <backend model='random'>/dev/random</backend>
	I1030 23:25:58.212009  229016 main.go:141] libmachine: (multinode-370491-m02)     </rng>
	I1030 23:25:58.212023  229016 main.go:141] libmachine: (multinode-370491-m02)     
	I1030 23:25:58.212035  229016 main.go:141] libmachine: (multinode-370491-m02)     
	I1030 23:25:58.212054  229016 main.go:141] libmachine: (multinode-370491-m02)   </devices>
	I1030 23:25:58.212072  229016 main.go:141] libmachine: (multinode-370491-m02) </domain>
	I1030 23:25:58.212089  229016 main.go:141] libmachine: (multinode-370491-m02) 
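The block above is the libvirt domain XML that the kvm2 driver logs line by line before defining the worker VM. As a minimal sketch of the define-and-start step those lines correspond to (assuming the libvirt.org/go/libvirt Go bindings and a local qemu:///system connection; this is not minikube's actual driver code, and the XML string is a placeholder):

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Connect to the system libvirt daemon, matching the KVMQemuURI (qemu:///system) in the config above.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// domainXML stands in for the <domain type='kvm'>...</domain> document logged above.
	domainXML := "<domain type='kvm'>...</domain>" // placeholder, not a complete definition

	// DomainDefineXML registers the persistent domain ("define libvirt domain using xml").
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	// Create boots the defined domain ("Creating domain...").
	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}
}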
	I1030 23:25:58.218955  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:21:c5:1e in network default
	I1030 23:25:58.219557  229016 main.go:141] libmachine: (multinode-370491-m02) Ensuring networks are active...
	I1030 23:25:58.219585  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:25:58.220315  229016 main.go:141] libmachine: (multinode-370491-m02) Ensuring network default is active
	I1030 23:25:58.220702  229016 main.go:141] libmachine: (multinode-370491-m02) Ensuring network mk-multinode-370491 is active
	I1030 23:25:58.221135  229016 main.go:141] libmachine: (multinode-370491-m02) Getting domain xml...
	I1030 23:25:58.221917  229016 main.go:141] libmachine: (multinode-370491-m02) Creating domain...
	I1030 23:25:59.470232  229016 main.go:141] libmachine: (multinode-370491-m02) Waiting to get IP...
	I1030 23:25:59.470966  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:25:59.471313  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | unable to find current IP address of domain multinode-370491-m02 in network mk-multinode-370491
	I1030 23:25:59.471384  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | I1030 23:25:59.471310  229377 retry.go:31] will retry after 242.12185ms: waiting for machine to come up
	I1030 23:25:59.715720  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:25:59.716245  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | unable to find current IP address of domain multinode-370491-m02 in network mk-multinode-370491
	I1030 23:25:59.716272  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | I1030 23:25:59.716192  229377 retry.go:31] will retry after 342.085332ms: waiting for machine to come up
	I1030 23:26:00.059813  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:00.060232  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | unable to find current IP address of domain multinode-370491-m02 in network mk-multinode-370491
	I1030 23:26:00.060280  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | I1030 23:26:00.060193  229377 retry.go:31] will retry after 297.871976ms: waiting for machine to come up
	I1030 23:26:00.359794  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:00.360252  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | unable to find current IP address of domain multinode-370491-m02 in network mk-multinode-370491
	I1030 23:26:00.360278  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | I1030 23:26:00.360213  229377 retry.go:31] will retry after 466.897548ms: waiting for machine to come up
	I1030 23:26:00.828920  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:00.829419  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | unable to find current IP address of domain multinode-370491-m02 in network mk-multinode-370491
	I1030 23:26:00.829448  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | I1030 23:26:00.829371  229377 retry.go:31] will retry after 682.210433ms: waiting for machine to come up
	I1030 23:26:01.513496  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:01.514050  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | unable to find current IP address of domain multinode-370491-m02 in network mk-multinode-370491
	I1030 23:26:01.514084  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | I1030 23:26:01.514004  229377 retry.go:31] will retry after 929.275415ms: waiting for machine to come up
	I1030 23:26:02.445135  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:02.445579  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | unable to find current IP address of domain multinode-370491-m02 in network mk-multinode-370491
	I1030 23:26:02.445609  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | I1030 23:26:02.445532  229377 retry.go:31] will retry after 1.056152244s: waiting for machine to come up
	I1030 23:26:03.503188  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:03.503649  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | unable to find current IP address of domain multinode-370491-m02 in network mk-multinode-370491
	I1030 23:26:03.503678  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | I1030 23:26:03.503576  229377 retry.go:31] will retry after 1.434598173s: waiting for machine to come up
	I1030 23:26:04.940241  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:04.940687  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | unable to find current IP address of domain multinode-370491-m02 in network mk-multinode-370491
	I1030 23:26:04.940729  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | I1030 23:26:04.940595  229377 retry.go:31] will retry after 1.447971639s: waiting for machine to come up
	I1030 23:26:06.390291  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:06.390713  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | unable to find current IP address of domain multinode-370491-m02 in network mk-multinode-370491
	I1030 23:26:06.390749  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | I1030 23:26:06.390652  229377 retry.go:31] will retry after 1.990794023s: waiting for machine to come up
	I1030 23:26:08.384236  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:08.385231  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | unable to find current IP address of domain multinode-370491-m02 in network mk-multinode-370491
	I1030 23:26:08.385267  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | I1030 23:26:08.385165  229377 retry.go:31] will retry after 1.806167838s: waiting for machine to come up
	I1030 23:26:10.194377  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:10.194887  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | unable to find current IP address of domain multinode-370491-m02 in network mk-multinode-370491
	I1030 23:26:10.194924  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | I1030 23:26:10.194850  229377 retry.go:31] will retry after 3.16555746s: waiting for machine to come up
	I1030 23:26:13.361780  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:13.362137  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | unable to find current IP address of domain multinode-370491-m02 in network mk-multinode-370491
	I1030 23:26:13.362167  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | I1030 23:26:13.362087  229377 retry.go:31] will retry after 3.725375426s: waiting for machine to come up
	I1030 23:26:17.091917  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:17.092324  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | unable to find current IP address of domain multinode-370491-m02 in network mk-multinode-370491
	I1030 23:26:17.092345  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | I1030 23:26:17.092291  229377 retry.go:31] will retry after 5.453742745s: waiting for machine to come up
	I1030 23:26:22.551255  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:22.551751  229016 main.go:141] libmachine: (multinode-370491-m02) Found IP for machine: 192.168.39.85
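The repeated "retry.go:31] will retry after ..." lines above show the driver polling for the new domain's DHCP lease with growing, jittered delays until an IP appears. A minimal sketch of that wait-with-backoff shape (the delays, the lookupIP helper, and the timeout are illustrative assumptions, not minikube's implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// lookupIP is a stand-in for reading the libvirt DHCP leases for the domain's MAC
// (52:54:00:a1:1d:9c above); here it always reports "no lease yet".
func lookupIP() (string, error) {
	return "", errNoIP
}

// waitForIP polls lookupIP with a growing, jittered delay until it succeeds or times out,
// mirroring the "will retry after ...: waiting for machine to come up" lines in the log.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out: %w", errNoIP)
}

func main() {
	if ip, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}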
	I1030 23:26:22.551770  229016 main.go:141] libmachine: (multinode-370491-m02) Reserving static IP address...
	I1030 23:26:22.551782  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has current primary IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:22.552204  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | unable to find host DHCP lease matching {name: "multinode-370491-m02", mac: "52:54:00:a1:1d:9c", ip: "192.168.39.85"} in network mk-multinode-370491
	I1030 23:26:22.625487  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | Getting to WaitForSSH function...
	I1030 23:26:22.625531  229016 main.go:141] libmachine: (multinode-370491-m02) Reserved static IP address: 192.168.39.85
	I1030 23:26:22.625547  229016 main.go:141] libmachine: (multinode-370491-m02) Waiting for SSH to be available...
	I1030 23:26:22.628089  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:22.628647  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:26:22.628689  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:22.628716  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | Using SSH client type: external
	I1030 23:26:22.628742  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m02/id_rsa (-rw-------)
	I1030 23:26:22.628779  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.85 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 23:26:22.628795  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | About to run SSH command:
	I1030 23:26:22.628810  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | exit 0
	I1030 23:26:22.728919  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | SSH cmd err, output: <nil>: 
	I1030 23:26:22.729183  229016 main.go:141] libmachine: (multinode-370491-m02) KVM machine creation complete!
	I1030 23:26:22.729572  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetConfigRaw
	I1030 23:26:22.730231  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .DriverName
	I1030 23:26:22.730468  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .DriverName
	I1030 23:26:22.730667  229016 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1030 23:26:22.730685  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetState
	I1030 23:26:22.732129  229016 main.go:141] libmachine: Detecting operating system of created instance...
	I1030 23:26:22.732150  229016 main.go:141] libmachine: Waiting for SSH to be available...
	I1030 23:26:22.732159  229016 main.go:141] libmachine: Getting to WaitForSSH function...
	I1030 23:26:22.732167  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	I1030 23:26:22.734925  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:22.735488  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:26:22.735523  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:22.735673  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHPort
	I1030 23:26:22.735896  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:26:22.736103  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:26:22.736238  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHUsername
	I1030 23:26:22.736404  229016 main.go:141] libmachine: Using SSH client type: native
	I1030 23:26:22.736825  229016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I1030 23:26:22.736840  229016 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1030 23:26:22.868314  229016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 23:26:22.868339  229016 main.go:141] libmachine: Detecting the provisioner...
	I1030 23:26:22.868348  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	I1030 23:26:22.871376  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:22.871750  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:26:22.871785  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:22.872007  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHPort
	I1030 23:26:22.872257  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:26:22.872463  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:26:22.872606  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHUsername
	I1030 23:26:22.872764  229016 main.go:141] libmachine: Using SSH client type: native
	I1030 23:26:22.873179  229016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I1030 23:26:22.873192  229016 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1030 23:26:23.009989  229016 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gea8740b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1030 23:26:23.010074  229016 main.go:141] libmachine: found compatible host: buildroot
	I1030 23:26:23.010081  229016 main.go:141] libmachine: Provisioning with buildroot...
	I1030 23:26:23.010091  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetMachineName
	I1030 23:26:23.010409  229016 buildroot.go:166] provisioning hostname "multinode-370491-m02"
	I1030 23:26:23.010438  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetMachineName
	I1030 23:26:23.010587  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	I1030 23:26:23.013403  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:23.013784  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:26:23.013809  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:23.013982  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHPort
	I1030 23:26:23.014174  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:26:23.014332  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:26:23.014467  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHUsername
	I1030 23:26:23.014608  229016 main.go:141] libmachine: Using SSH client type: native
	I1030 23:26:23.014939  229016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I1030 23:26:23.014959  229016 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-370491-m02 && echo "multinode-370491-m02" | sudo tee /etc/hostname
	I1030 23:26:23.160787  229016 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-370491-m02
	
	I1030 23:26:23.160836  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	I1030 23:26:23.163945  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:23.164331  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:26:23.164360  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:23.164593  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHPort
	I1030 23:26:23.164819  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:26:23.165015  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:26:23.165144  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHUsername
	I1030 23:26:23.165347  229016 main.go:141] libmachine: Using SSH client type: native
	I1030 23:26:23.165673  229016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I1030 23:26:23.165694  229016 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-370491-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-370491-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-370491-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 23:26:23.304391  229016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 23:26:23.304424  229016 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1030 23:26:23.304442  229016 buildroot.go:174] setting up certificates
	I1030 23:26:23.304454  229016 provision.go:83] configureAuth start
	I1030 23:26:23.304464  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetMachineName
	I1030 23:26:23.304747  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetIP
	I1030 23:26:23.307536  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:23.307872  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:26:23.307909  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:23.307996  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	I1030 23:26:23.310127  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:23.310472  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:26:23.310501  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:23.310689  229016 provision.go:138] copyHostCerts
	I1030 23:26:23.310724  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1030 23:26:23.310758  229016 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1030 23:26:23.310767  229016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1030 23:26:23.310828  229016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1030 23:26:23.310937  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1030 23:26:23.310954  229016 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1030 23:26:23.310958  229016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1030 23:26:23.310988  229016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1030 23:26:23.311033  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1030 23:26:23.311053  229016 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1030 23:26:23.311060  229016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1030 23:26:23.311080  229016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1030 23:26:23.311132  229016 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.multinode-370491-m02 san=[192.168.39.85 192.168.39.85 localhost 127.0.0.1 minikube multinode-370491-m02]
	I1030 23:26:23.451295  229016 provision.go:172] copyRemoteCerts
	I1030 23:26:23.451351  229016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 23:26:23.451386  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	I1030 23:26:23.454204  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:23.454525  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:26:23.454554  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:23.454723  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHPort
	I1030 23:26:23.454870  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:26:23.455036  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHUsername
	I1030 23:26:23.455295  229016 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m02/id_rsa Username:docker}
	I1030 23:26:23.550048  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 23:26:23.550120  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1030 23:26:23.573583  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 23:26:23.573645  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1030 23:26:23.596242  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 23:26:23.596302  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 23:26:23.620131  229016 provision.go:86] duration metric: configureAuth took 315.651432ms
	I1030 23:26:23.620156  229016 buildroot.go:189] setting minikube options for container-runtime
	I1030 23:26:23.620369  229016 config.go:182] Loaded profile config "multinode-370491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:26:23.620466  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	I1030 23:26:23.622943  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:23.623284  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:26:23.623324  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:23.623573  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHPort
	I1030 23:26:23.623782  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:26:23.623998  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:26:23.624161  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHUsername
	I1030 23:26:23.624360  229016 main.go:141] libmachine: Using SSH client type: native
	I1030 23:26:23.624830  229016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I1030 23:26:23.624855  229016 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 23:26:23.956383  229016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 23:26:23.956423  229016 main.go:141] libmachine: Checking connection to Docker...
	I1030 23:26:23.956437  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetURL
	I1030 23:26:23.957690  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | Using libvirt version 6000000
	I1030 23:26:23.959869  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:23.960173  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:26:23.960197  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:23.960359  229016 main.go:141] libmachine: Docker is up and running!
	I1030 23:26:23.960373  229016 main.go:141] libmachine: Reticulating splines...
	I1030 23:26:23.960380  229016 client.go:171] LocalClient.Create took 26.103523232s
	I1030 23:26:23.960407  229016 start.go:167] duration metric: libmachine.API.Create for "multinode-370491" took 26.103590177s
	I1030 23:26:23.960421  229016 start.go:300] post-start starting for "multinode-370491-m02" (driver="kvm2")
	I1030 23:26:23.960434  229016 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 23:26:23.960459  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .DriverName
	I1030 23:26:23.960732  229016 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 23:26:23.960761  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	I1030 23:26:23.963229  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:23.963591  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:26:23.963617  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:23.963782  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHPort
	I1030 23:26:23.963953  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:26:23.964085  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHUsername
	I1030 23:26:23.964199  229016 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m02/id_rsa Username:docker}
	I1030 23:26:24.062903  229016 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 23:26:24.067552  229016 command_runner.go:130] > NAME=Buildroot
	I1030 23:26:24.067584  229016 command_runner.go:130] > VERSION=2021.02.12-1-gea8740b-dirty
	I1030 23:26:24.067592  229016 command_runner.go:130] > ID=buildroot
	I1030 23:26:24.067601  229016 command_runner.go:130] > VERSION_ID=2021.02.12
	I1030 23:26:24.067610  229016 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1030 23:26:24.067646  229016 info.go:137] Remote host: Buildroot 2021.02.12
	I1030 23:26:24.067666  229016 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1030 23:26:24.067754  229016 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1030 23:26:24.067886  229016 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1030 23:26:24.067902  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> /etc/ssl/certs/2160052.pem
	I1030 23:26:24.068034  229016 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 23:26:24.076721  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1030 23:26:24.099938  229016 start.go:303] post-start completed in 139.499643ms
	I1030 23:26:24.099993  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetConfigRaw
	I1030 23:26:24.100630  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetIP
	I1030 23:26:24.103649  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:24.104146  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:26:24.104183  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:24.104570  229016 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/config.json ...
	I1030 23:26:24.104764  229016 start.go:128] duration metric: createHost completed in 26.266040667s
	I1030 23:26:24.104790  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	I1030 23:26:24.107125  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:24.107569  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:26:24.107602  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:24.107688  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHPort
	I1030 23:26:24.107901  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:26:24.108078  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:26:24.108241  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHUsername
	I1030 23:26:24.108442  229016 main.go:141] libmachine: Using SSH client type: native
	I1030 23:26:24.108772  229016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I1030 23:26:24.108786  229016 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1030 23:26:24.246685  229016 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698708384.219496352
	
	I1030 23:26:24.246715  229016 fix.go:206] guest clock: 1698708384.219496352
	I1030 23:26:24.246724  229016 fix.go:219] Guest: 2023-10-30 23:26:24.219496352 +0000 UTC Remote: 2023-10-30 23:26:24.104776717 +0000 UTC m=+92.670354481 (delta=114.719635ms)
	I1030 23:26:24.246748  229016 fix.go:190] guest clock delta is within tolerance: 114.719635ms
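The two fix.go lines above compare the guest VM clock against the local clock and accept the ~114.7ms skew. Below is a minimal Go sketch of that delta check, using the two timestamps taken from the log; the 2-second tolerance constant is an assumption for illustration only, since the log reports "within tolerance" but not the actual threshold.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the fix.go lines above.
	guest := time.Unix(0, 1698708384219496352) // guest clock: 2023-10-30 23:26:24.219496352 UTC
	local := time.Date(2023, time.October, 30, 23, 26, 24, 104776717, time.UTC)

	delta := guest.Sub(local)
	if delta < 0 {
		delta = -delta
	}
	fmt.Println(delta) // 114.719635ms, matching the delta reported in the log

	// Assumed tolerance for illustration; the real threshold is not shown in this log.
	const tolerance = 2 * time.Second
	fmt.Println("within tolerance:", delta <= tolerance)
}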
	I1030 23:26:24.246755  229016 start.go:83] releasing machines lock for "multinode-370491-m02", held for 26.408106083s
	I1030 23:26:24.246787  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .DriverName
	I1030 23:26:24.247148  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetIP
	I1030 23:26:24.250295  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:24.250668  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:26:24.250701  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:24.253134  229016 out.go:177] * Found network options:
	I1030 23:26:24.254666  229016 out.go:177]   - NO_PROXY=192.168.39.231
	W1030 23:26:24.255985  229016 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 23:26:24.256027  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .DriverName
	I1030 23:26:24.256795  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .DriverName
	I1030 23:26:24.257071  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .DriverName
	I1030 23:26:24.257182  229016 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 23:26:24.257225  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	W1030 23:26:24.257503  229016 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 23:26:24.257597  229016 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 23:26:24.257619  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	I1030 23:26:24.260390  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:24.260728  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:24.260869  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:26:24.260903  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:24.261100  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:26:24.261120  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHPort
	I1030 23:26:24.261151  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:24.261337  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:26:24.261431  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHPort
	I1030 23:26:24.261502  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHUsername
	I1030 23:26:24.261583  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:26:24.261650  229016 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m02/id_rsa Username:docker}
	I1030 23:26:24.261740  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHUsername
	I1030 23:26:24.261879  229016 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m02/id_rsa Username:docker}
	I1030 23:26:24.536572  229016 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1030 23:26:24.536573  229016 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1030 23:26:24.543246  229016 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1030 23:26:24.543326  229016 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 23:26:24.543403  229016 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 23:26:24.558969  229016 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1030 23:26:24.559050  229016 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 23:26:24.559060  229016 start.go:472] detecting cgroup driver to use...
	I1030 23:26:24.559134  229016 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 23:26:24.576008  229016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 23:26:24.589464  229016 docker.go:198] disabling cri-docker service (if available) ...
	I1030 23:26:24.589526  229016 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 23:26:24.603563  229016 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 23:26:24.616570  229016 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 23:26:24.721739  229016 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1030 23:26:24.721818  229016 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 23:26:24.838535  229016 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1030 23:26:24.838572  229016 docker.go:214] disabling docker service ...
	I1030 23:26:24.838618  229016 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 23:26:24.851456  229016 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 23:26:24.862889  229016 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1030 23:26:24.862986  229016 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 23:26:24.973002  229016 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1030 23:26:24.973113  229016 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 23:26:25.081661  229016 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1030 23:26:25.081699  229016 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1030 23:26:25.081779  229016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 23:26:25.094646  229016 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 23:26:25.112267  229016 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1030 23:26:25.112320  229016 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1030 23:26:25.112370  229016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:26:25.121686  229016 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 23:26:25.121769  229016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:26:25.131181  229016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:26:25.140528  229016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
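The three sed commands above rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod"; the same values later appear in the `crio config` dump further down in this log. A rough Go sketch of the equivalent line rewrites follows; the starting file content in the sketch is a made-up placeholder, not the node's real drop-in.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical starting content; the real 02-crio.conf is not shown at this point in the log.
	conf := "pause_image = \"registry.k8s.io/pause:3.2\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"

	// 1) pin the pause image the log configures
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// 2) switch the cgroup manager to cgroupfs
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// 3) drop any existing conmon_cgroup line, then re-add it after cgroup_manager
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}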
	I1030 23:26:25.150256  229016 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 23:26:25.160120  229016 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 23:26:25.168641  229016 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 23:26:25.168786  229016 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 23:26:25.168855  229016 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 23:26:25.183176  229016 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 23:26:25.192751  229016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 23:26:25.307189  229016 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 23:26:25.486736  229016 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 23:26:25.486819  229016 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 23:26:25.491775  229016 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1030 23:26:25.491806  229016 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1030 23:26:25.491818  229016 command_runner.go:130] > Device: 16h/22d	Inode: 742         Links: 1
	I1030 23:26:25.491829  229016 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1030 23:26:25.491837  229016 command_runner.go:130] > Access: 2023-10-30 23:26:25.444559747 +0000
	I1030 23:26:25.491846  229016 command_runner.go:130] > Modify: 2023-10-30 23:26:25.444559747 +0000
	I1030 23:26:25.491868  229016 command_runner.go:130] > Change: 2023-10-30 23:26:25.444559747 +0000
	I1030 23:26:25.491880  229016 command_runner.go:130] >  Birth: -
	I1030 23:26:25.491939  229016 start.go:540] Will wait 60s for crictl version
	I1030 23:26:25.492008  229016 ssh_runner.go:195] Run: which crictl
	I1030 23:26:25.495835  229016 command_runner.go:130] > /usr/bin/crictl
	I1030 23:26:25.495902  229016 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 23:26:25.535971  229016 command_runner.go:130] > Version:  0.1.0
	I1030 23:26:25.536004  229016 command_runner.go:130] > RuntimeName:  cri-o
	I1030 23:26:25.536032  229016 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1030 23:26:25.536584  229016 command_runner.go:130] > RuntimeApiVersion:  v1
	I1030 23:26:25.538668  229016 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1030 23:26:25.538760  229016 ssh_runner.go:195] Run: crio --version
	I1030 23:26:25.591114  229016 command_runner.go:130] > crio version 1.24.1
	I1030 23:26:25.591135  229016 command_runner.go:130] > Version:          1.24.1
	I1030 23:26:25.591141  229016 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1030 23:26:25.591146  229016 command_runner.go:130] > GitTreeState:     dirty
	I1030 23:26:25.591152  229016 command_runner.go:130] > BuildDate:        2023-10-30T22:24:56Z
	I1030 23:26:25.591157  229016 command_runner.go:130] > GoVersion:        go1.19.9
	I1030 23:26:25.591161  229016 command_runner.go:130] > Compiler:         gc
	I1030 23:26:25.591166  229016 command_runner.go:130] > Platform:         linux/amd64
	I1030 23:26:25.591171  229016 command_runner.go:130] > Linkmode:         dynamic
	I1030 23:26:25.591178  229016 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1030 23:26:25.591183  229016 command_runner.go:130] > SeccompEnabled:   true
	I1030 23:26:25.591188  229016 command_runner.go:130] > AppArmorEnabled:  false
	I1030 23:26:25.591419  229016 ssh_runner.go:195] Run: crio --version
	I1030 23:26:25.639285  229016 command_runner.go:130] > crio version 1.24.1
	I1030 23:26:25.639320  229016 command_runner.go:130] > Version:          1.24.1
	I1030 23:26:25.639333  229016 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1030 23:26:25.639340  229016 command_runner.go:130] > GitTreeState:     dirty
	I1030 23:26:25.639350  229016 command_runner.go:130] > BuildDate:        2023-10-30T22:24:56Z
	I1030 23:26:25.639356  229016 command_runner.go:130] > GoVersion:        go1.19.9
	I1030 23:26:25.639361  229016 command_runner.go:130] > Compiler:         gc
	I1030 23:26:25.639369  229016 command_runner.go:130] > Platform:         linux/amd64
	I1030 23:26:25.639376  229016 command_runner.go:130] > Linkmode:         dynamic
	I1030 23:26:25.639387  229016 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1030 23:26:25.639394  229016 command_runner.go:130] > SeccompEnabled:   true
	I1030 23:26:25.639402  229016 command_runner.go:130] > AppArmorEnabled:  false
	I1030 23:26:25.642666  229016 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1030 23:26:25.644124  229016 out.go:177]   - env NO_PROXY=192.168.39.231
	I1030 23:26:25.645529  229016 main.go:141] libmachine: (multinode-370491-m02) Calling .GetIP
	I1030 23:26:25.647862  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:25.648253  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:26:25.648291  229016 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:26:25.648439  229016 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 23:26:25.653088  229016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
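The bash one-liner above rebuilds /etc/hosts by filtering out any stale host.minikube.internal entry and appending the gateway mapping for 192.168.39.1 (the gateway IP is taken from the log). A small Go sketch of the same filter-then-append logic; the sample hosts content is invented for illustration.

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Example /etc/hosts content; the node's real file is not shown in the log.
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"

	// Keep every line that is not a host.minikube.internal entry...
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	// ...then append the gateway mapping the command above writes.
	kept = append(kept, "192.168.39.1\thost.minikube.internal")
	fmt.Println(strings.Join(kept, "\n"))
}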
	I1030 23:26:25.665600  229016 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491 for IP: 192.168.39.85
	I1030 23:26:25.665628  229016 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:26:25.665858  229016 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1030 23:26:25.665920  229016 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1030 23:26:25.665937  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 23:26:25.665950  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 23:26:25.665963  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 23:26:25.665975  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 23:26:25.666029  229016 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1030 23:26:25.666058  229016 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1030 23:26:25.666070  229016 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 23:26:25.666103  229016 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1030 23:26:25.666125  229016 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1030 23:26:25.666146  229016 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1030 23:26:25.666183  229016 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1030 23:26:25.666209  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:26:25.666222  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem -> /usr/share/ca-certificates/216005.pem
	I1030 23:26:25.666234  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> /usr/share/ca-certificates/2160052.pem
	I1030 23:26:25.666553  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 23:26:25.689074  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 23:26:25.711401  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 23:26:25.734826  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1030 23:26:25.758185  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 23:26:25.781304  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1030 23:26:25.809094  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1030 23:26:25.832039  229016 ssh_runner.go:195] Run: openssl version
	I1030 23:26:25.837301  229016 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1030 23:26:25.837400  229016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1030 23:26:25.846247  229016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1030 23:26:25.850605  229016 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1030 23:26:25.850634  229016 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1030 23:26:25.850673  229016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1030 23:26:25.855660  229016 command_runner.go:130] > 51391683
	I1030 23:26:25.856024  229016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1030 23:26:25.865182  229016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1030 23:26:25.874143  229016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1030 23:26:25.878458  229016 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1030 23:26:25.878545  229016 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1030 23:26:25.878601  229016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1030 23:26:25.883513  229016 command_runner.go:130] > 3ec20f2e
	I1030 23:26:25.883664  229016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 23:26:25.892558  229016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 23:26:25.901605  229016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:26:25.905899  229016 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:26:25.906068  229016 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:26:25.906112  229016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:26:25.911149  229016 command_runner.go:130] > b5213941
	I1030 23:26:25.911440  229016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 23:26:25.920765  229016 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1030 23:26:25.924588  229016 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1030 23:26:25.924903  229016 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1030 23:26:25.925026  229016 ssh_runner.go:195] Run: crio config
	I1030 23:26:25.981212  229016 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1030 23:26:25.981248  229016 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1030 23:26:25.981260  229016 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1030 23:26:25.981263  229016 command_runner.go:130] > #
	I1030 23:26:25.981271  229016 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1030 23:26:25.981278  229016 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1030 23:26:25.981289  229016 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1030 23:26:25.981305  229016 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1030 23:26:25.981316  229016 command_runner.go:130] > # reload'.
	I1030 23:26:25.981327  229016 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1030 23:26:25.981340  229016 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1030 23:26:25.981354  229016 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1030 23:26:25.981363  229016 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1030 23:26:25.981368  229016 command_runner.go:130] > [crio]
	I1030 23:26:25.981383  229016 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1030 23:26:25.981395  229016 command_runner.go:130] > # containers images, in this directory.
	I1030 23:26:25.981429  229016 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1030 23:26:25.981444  229016 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1030 23:26:25.981813  229016 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1030 23:26:25.981836  229016 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1030 23:26:25.981864  229016 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1030 23:26:25.982156  229016 command_runner.go:130] > storage_driver = "overlay"
	I1030 23:26:25.982175  229016 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1030 23:26:25.982186  229016 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1030 23:26:25.982193  229016 command_runner.go:130] > storage_option = [
	I1030 23:26:25.982294  229016 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1030 23:26:25.982872  229016 command_runner.go:130] > ]
	I1030 23:26:25.982895  229016 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1030 23:26:25.982906  229016 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1030 23:26:25.983074  229016 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1030 23:26:25.983091  229016 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1030 23:26:25.983103  229016 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1030 23:26:25.983111  229016 command_runner.go:130] > # always happen on a node reboot
	I1030 23:26:25.983750  229016 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1030 23:26:25.983767  229016 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1030 23:26:25.983777  229016 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1030 23:26:25.983793  229016 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1030 23:26:25.984263  229016 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1030 23:26:25.984277  229016 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1030 23:26:25.984286  229016 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1030 23:26:25.985040  229016 command_runner.go:130] > # internal_wipe = true
	I1030 23:26:25.985056  229016 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1030 23:26:25.985068  229016 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1030 23:26:25.985083  229016 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1030 23:26:25.985705  229016 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1030 23:26:25.985726  229016 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1030 23:26:25.985732  229016 command_runner.go:130] > [crio.api]
	I1030 23:26:25.985737  229016 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1030 23:26:25.985742  229016 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1030 23:26:25.985749  229016 command_runner.go:130] > # IP address on which the stream server will listen.
	I1030 23:26:25.985769  229016 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1030 23:26:25.985781  229016 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1030 23:26:25.985790  229016 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1030 23:26:25.985801  229016 command_runner.go:130] > # stream_port = "0"
	I1030 23:26:25.985812  229016 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1030 23:26:25.986008  229016 command_runner.go:130] > # stream_enable_tls = false
	I1030 23:26:25.986028  229016 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1030 23:26:25.986035  229016 command_runner.go:130] > # stream_idle_timeout = ""
	I1030 23:26:25.986045  229016 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1030 23:26:25.986060  229016 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1030 23:26:25.986069  229016 command_runner.go:130] > # minutes.
	I1030 23:26:25.986081  229016 command_runner.go:130] > # stream_tls_cert = ""
	I1030 23:26:25.986094  229016 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1030 23:26:25.986108  229016 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1030 23:26:25.986118  229016 command_runner.go:130] > # stream_tls_key = ""
	I1030 23:26:25.986131  229016 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1030 23:26:25.986145  229016 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1030 23:26:25.986157  229016 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1030 23:26:25.986166  229016 command_runner.go:130] > # stream_tls_ca = ""
	I1030 23:26:25.986185  229016 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1030 23:26:25.986219  229016 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1030 23:26:25.986236  229016 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1030 23:26:25.986244  229016 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1030 23:26:25.986281  229016 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1030 23:26:25.986295  229016 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1030 23:26:25.986307  229016 command_runner.go:130] > [crio.runtime]
	I1030 23:26:25.986320  229016 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1030 23:26:25.986332  229016 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1030 23:26:25.986342  229016 command_runner.go:130] > # "nofile=1024:2048"
	I1030 23:26:25.986353  229016 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1030 23:26:25.986360  229016 command_runner.go:130] > # default_ulimits = [
	I1030 23:26:25.986366  229016 command_runner.go:130] > # ]
	I1030 23:26:25.986377  229016 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1030 23:26:25.986386  229016 command_runner.go:130] > # no_pivot = false
	I1030 23:26:25.986397  229016 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1030 23:26:25.986412  229016 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1030 23:26:25.986434  229016 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1030 23:26:25.986447  229016 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1030 23:26:25.986458  229016 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1030 23:26:25.986473  229016 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1030 23:26:25.986483  229016 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1030 23:26:25.986494  229016 command_runner.go:130] > # Cgroup setting for conmon
	I1030 23:26:25.986508  229016 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1030 23:26:25.986518  229016 command_runner.go:130] > conmon_cgroup = "pod"
	I1030 23:26:25.986532  229016 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1030 23:26:25.986545  229016 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1030 23:26:25.986560  229016 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1030 23:26:25.986575  229016 command_runner.go:130] > conmon_env = [
	I1030 23:26:25.986590  229016 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1030 23:26:25.986599  229016 command_runner.go:130] > ]
	I1030 23:26:25.986609  229016 command_runner.go:130] > # Additional environment variables to set for all the
	I1030 23:26:25.986621  229016 command_runner.go:130] > # containers. These are overridden if set in the
	I1030 23:26:25.986634  229016 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1030 23:26:25.986646  229016 command_runner.go:130] > # default_env = [
	I1030 23:26:25.986651  229016 command_runner.go:130] > # ]
	I1030 23:26:25.986665  229016 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1030 23:26:25.986675  229016 command_runner.go:130] > # selinux = false
	I1030 23:26:25.986688  229016 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1030 23:26:25.986702  229016 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1030 23:26:25.986714  229016 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1030 23:26:25.986724  229016 command_runner.go:130] > # seccomp_profile = ""
	I1030 23:26:25.986732  229016 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1030 23:26:25.986743  229016 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1030 23:26:25.986781  229016 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1030 23:26:25.986792  229016 command_runner.go:130] > # which might increase security.
	I1030 23:26:25.986803  229016 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1030 23:26:25.986816  229016 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1030 23:26:25.986829  229016 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1030 23:26:25.986842  229016 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1030 23:26:25.986853  229016 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1030 23:26:25.986865  229016 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:26:25.986876  229016 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1030 23:26:25.986891  229016 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1030 23:26:25.986902  229016 command_runner.go:130] > # the cgroup blockio controller.
	I1030 23:26:25.986910  229016 command_runner.go:130] > # blockio_config_file = ""
	I1030 23:26:25.986922  229016 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1030 23:26:25.986932  229016 command_runner.go:130] > # irqbalance daemon.
	I1030 23:26:25.986942  229016 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1030 23:26:25.986957  229016 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1030 23:26:25.986970  229016 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:26:25.986980  229016 command_runner.go:130] > # rdt_config_file = ""
	I1030 23:26:25.986991  229016 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1030 23:26:25.987002  229016 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1030 23:26:25.987018  229016 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1030 23:26:25.987031  229016 command_runner.go:130] > # separate_pull_cgroup = ""
	I1030 23:26:25.987042  229016 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1030 23:26:25.987055  229016 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1030 23:26:25.987066  229016 command_runner.go:130] > # will be added.
	I1030 23:26:25.987074  229016 command_runner.go:130] > # default_capabilities = [
	I1030 23:26:25.987080  229016 command_runner.go:130] > # 	"CHOWN",
	I1030 23:26:25.987087  229016 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1030 23:26:25.987094  229016 command_runner.go:130] > # 	"FSETID",
	I1030 23:26:25.987102  229016 command_runner.go:130] > # 	"FOWNER",
	I1030 23:26:25.987109  229016 command_runner.go:130] > # 	"SETGID",
	I1030 23:26:25.987120  229016 command_runner.go:130] > # 	"SETUID",
	I1030 23:26:25.987128  229016 command_runner.go:130] > # 	"SETPCAP",
	I1030 23:26:25.987139  229016 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1030 23:26:25.987147  229016 command_runner.go:130] > # 	"KILL",
	I1030 23:26:25.987153  229016 command_runner.go:130] > # ]
	I1030 23:26:25.987167  229016 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1030 23:26:25.987180  229016 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1030 23:26:25.987191  229016 command_runner.go:130] > # default_sysctls = [
	I1030 23:26:25.987198  229016 command_runner.go:130] > # ]
	I1030 23:26:25.987207  229016 command_runner.go:130] > # List of devices on the host that a
	I1030 23:26:25.987222  229016 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1030 23:26:25.987232  229016 command_runner.go:130] > # allowed_devices = [
	I1030 23:26:25.987238  229016 command_runner.go:130] > # 	"/dev/fuse",
	I1030 23:26:25.987248  229016 command_runner.go:130] > # ]
	I1030 23:26:25.987256  229016 command_runner.go:130] > # List of additional devices. specified as
	I1030 23:26:25.987269  229016 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1030 23:26:25.987281  229016 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1030 23:26:25.987308  229016 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1030 23:26:25.987319  229016 command_runner.go:130] > # additional_devices = [
	I1030 23:26:25.987328  229016 command_runner.go:130] > # ]
	I1030 23:26:25.987340  229016 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1030 23:26:25.987350  229016 command_runner.go:130] > # cdi_spec_dirs = [
	I1030 23:26:25.987356  229016 command_runner.go:130] > # 	"/etc/cdi",
	I1030 23:26:25.987364  229016 command_runner.go:130] > # 	"/var/run/cdi",
	I1030 23:26:25.987371  229016 command_runner.go:130] > # ]
	I1030 23:26:25.987388  229016 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1030 23:26:25.987402  229016 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1030 23:26:25.987413  229016 command_runner.go:130] > # Defaults to false.
	I1030 23:26:25.987430  229016 command_runner.go:130] > # device_ownership_from_security_context = false
	I1030 23:26:25.987444  229016 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1030 23:26:25.987457  229016 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1030 23:26:25.987463  229016 command_runner.go:130] > # hooks_dir = [
	I1030 23:26:25.987475  229016 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1030 23:26:25.987484  229016 command_runner.go:130] > # ]
	I1030 23:26:25.987496  229016 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1030 23:26:25.987511  229016 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1030 23:26:25.987524  229016 command_runner.go:130] > # its default mounts from the following two files:
	I1030 23:26:25.987533  229016 command_runner.go:130] > #
	I1030 23:26:25.987549  229016 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1030 23:26:25.987565  229016 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1030 23:26:25.987579  229016 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1030 23:26:25.987589  229016 command_runner.go:130] > #
	I1030 23:26:25.987603  229016 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1030 23:26:25.987619  229016 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1030 23:26:25.987634  229016 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1030 23:26:25.987647  229016 command_runner.go:130] > #      only add mounts it finds in this file.
	I1030 23:26:25.987653  229016 command_runner.go:130] > #
	I1030 23:26:25.987660  229016 command_runner.go:130] > # default_mounts_file = ""
	I1030 23:26:25.987669  229016 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1030 23:26:25.987684  229016 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1030 23:26:25.987694  229016 command_runner.go:130] > pids_limit = 1024
	I1030 23:26:25.987704  229016 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1030 23:26:25.987719  229016 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1030 23:26:25.987733  229016 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1030 23:26:25.987751  229016 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1030 23:26:25.987793  229016 command_runner.go:130] > # log_size_max = -1
	I1030 23:26:25.987808  229016 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1030 23:26:25.987815  229016 command_runner.go:130] > # log_to_journald = false
	I1030 23:26:25.987828  229016 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1030 23:26:25.987840  229016 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1030 23:26:25.987852  229016 command_runner.go:130] > # Path to directory for container attach sockets.
	I1030 23:26:25.987866  229016 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1030 23:26:25.987877  229016 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1030 23:26:25.987884  229016 command_runner.go:130] > # bind_mount_prefix = ""
	I1030 23:26:25.987898  229016 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1030 23:26:25.987905  229016 command_runner.go:130] > # read_only = false
	I1030 23:26:25.987914  229016 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1030 23:26:25.987925  229016 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1030 23:26:25.987936  229016 command_runner.go:130] > # live configuration reload.
	I1030 23:26:25.987945  229016 command_runner.go:130] > # log_level = "info"
	I1030 23:26:25.987953  229016 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1030 23:26:25.987969  229016 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:26:25.987979  229016 command_runner.go:130] > # log_filter = ""
	I1030 23:26:25.987988  229016 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1030 23:26:25.988000  229016 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1030 23:26:25.988010  229016 command_runner.go:130] > # separated by comma.
	I1030 23:26:25.988016  229016 command_runner.go:130] > # uid_mappings = ""
	I1030 23:26:25.988025  229016 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1030 23:26:25.988038  229016 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1030 23:26:25.988048  229016 command_runner.go:130] > # separated by comma.
	I1030 23:26:25.988058  229016 command_runner.go:130] > # gid_mappings = ""
	I1030 23:26:25.988071  229016 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1030 23:26:25.988085  229016 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1030 23:26:25.988100  229016 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1030 23:26:25.988110  229016 command_runner.go:130] > # minimum_mappable_uid = -1
	I1030 23:26:25.988124  229016 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1030 23:26:25.988137  229016 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1030 23:26:25.988151  229016 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1030 23:26:25.988161  229016 command_runner.go:130] > # minimum_mappable_gid = -1
	I1030 23:26:25.988174  229016 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1030 23:26:25.988187  229016 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1030 23:26:25.988199  229016 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1030 23:26:25.988209  229016 command_runner.go:130] > # ctr_stop_timeout = 30
	I1030 23:26:25.988218  229016 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1030 23:26:25.988227  229016 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1030 23:26:25.988239  229016 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1030 23:26:25.988251  229016 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1030 23:26:25.988263  229016 command_runner.go:130] > drop_infra_ctr = false
	I1030 23:26:25.988276  229016 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1030 23:26:25.988288  229016 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1030 23:26:25.988298  229016 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1030 23:26:25.988305  229016 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1030 23:26:25.988310  229016 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1030 23:26:25.988316  229016 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1030 23:26:25.988323  229016 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1030 23:26:25.988329  229016 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1030 23:26:25.988336  229016 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1030 23:26:25.988342  229016 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1030 23:26:25.988350  229016 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1030 23:26:25.988357  229016 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1030 23:26:25.988363  229016 command_runner.go:130] > # default_runtime = "runc"
	I1030 23:26:25.988369  229016 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1030 23:26:25.988378  229016 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1030 23:26:25.988389  229016 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1030 23:26:25.988396  229016 command_runner.go:130] > # creation as a file is not desired either.
	I1030 23:26:25.988404  229016 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1030 23:26:25.988414  229016 command_runner.go:130] > # the hostname is being managed dynamically.
	I1030 23:26:25.988428  229016 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1030 23:26:25.988437  229016 command_runner.go:130] > # ]
	I1030 23:26:25.988448  229016 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1030 23:26:25.988462  229016 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1030 23:26:25.988476  229016 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1030 23:26:25.988486  229016 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1030 23:26:25.988495  229016 command_runner.go:130] > #
	I1030 23:26:25.988503  229016 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1030 23:26:25.988514  229016 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1030 23:26:25.988521  229016 command_runner.go:130] > #  runtime_type = "oci"
	I1030 23:26:25.988532  229016 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1030 23:26:25.988574  229016 command_runner.go:130] > #  privileged_without_host_devices = false
	I1030 23:26:25.988585  229016 command_runner.go:130] > #  allowed_annotations = []
	I1030 23:26:25.988594  229016 command_runner.go:130] > # Where:
	I1030 23:26:25.988607  229016 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1030 23:26:25.988621  229016 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1030 23:26:25.988632  229016 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1030 23:26:25.988645  229016 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1030 23:26:25.988651  229016 command_runner.go:130] > #   in $PATH.
	I1030 23:26:25.988666  229016 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1030 23:26:25.988675  229016 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1030 23:26:25.988688  229016 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1030 23:26:25.988698  229016 command_runner.go:130] > #   state.
	I1030 23:26:25.988708  229016 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1030 23:26:25.988721  229016 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1030 23:26:25.988736  229016 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1030 23:26:25.988745  229016 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1030 23:26:25.988755  229016 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1030 23:26:25.988765  229016 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1030 23:26:25.988774  229016 command_runner.go:130] > #   The currently recognized values are:
	I1030 23:26:25.988785  229016 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1030 23:26:25.988797  229016 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1030 23:26:25.988810  229016 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1030 23:26:25.988823  229016 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1030 23:26:25.988836  229016 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1030 23:26:25.988851  229016 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1030 23:26:25.988862  229016 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1030 23:26:25.988878  229016 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1030 23:26:25.988889  229016 command_runner.go:130] > #   should be moved to the container's cgroup
	I1030 23:26:25.988899  229016 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1030 23:26:25.988907  229016 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1030 23:26:25.988912  229016 command_runner.go:130] > runtime_type = "oci"
	I1030 23:26:25.988919  229016 command_runner.go:130] > runtime_root = "/run/runc"
	I1030 23:26:25.988923  229016 command_runner.go:130] > runtime_config_path = ""
	I1030 23:26:25.988930  229016 command_runner.go:130] > monitor_path = ""
	I1030 23:26:25.988934  229016 command_runner.go:130] > monitor_cgroup = ""
	I1030 23:26:25.988956  229016 command_runner.go:130] > monitor_exec_cgroup = ""
	I1030 23:26:25.988970  229016 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1030 23:26:25.988981  229016 command_runner.go:130] > # running containers
	I1030 23:26:25.988989  229016 command_runner.go:130] > #[crio.runtime.runtimes.crun]
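	For reference, a filled-in crun handler following the runtime table format documented above might look like the sketch below; the paths are assumptions and must match where crun is actually installed on the node.
	[crio.runtime.runtimes.crun]
	# Hypothetical entry: runtime_path must point at an existing crun binary.
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"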
	I1030 23:26:25.989003  229016 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1030 23:26:25.989032  229016 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1030 23:26:25.989043  229016 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1030 23:26:25.989053  229016 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1030 23:26:25.989059  229016 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1030 23:26:25.989064  229016 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1030 23:26:25.989069  229016 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1030 23:26:25.989075  229016 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1030 23:26:25.989080  229016 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1030 23:26:25.989089  229016 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1030 23:26:25.989094  229016 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1030 23:26:25.989103  229016 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1030 23:26:25.989110  229016 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1030 23:26:25.989120  229016 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1030 23:26:25.989129  229016 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1030 23:26:25.989138  229016 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1030 23:26:25.989148  229016 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1030 23:26:25.989156  229016 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1030 23:26:25.989164  229016 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1030 23:26:25.989171  229016 command_runner.go:130] > # Example:
	I1030 23:26:25.989176  229016 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1030 23:26:25.989183  229016 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1030 23:26:25.989189  229016 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1030 23:26:25.989196  229016 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1030 23:26:25.989200  229016 command_runner.go:130] > # cpuset = 0
	I1030 23:26:25.989206  229016 command_runner.go:130] > # cpushares = "0-1"
	I1030 23:26:25.989210  229016 command_runner.go:130] > # Where:
	I1030 23:26:25.989217  229016 command_runner.go:130] > # The workload name is workload-type.
	I1030 23:26:25.989224  229016 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1030 23:26:25.989232  229016 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1030 23:26:25.989237  229016 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1030 23:26:25.989269  229016 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1030 23:26:25.989282  229016 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1030 23:26:25.989288  229016 command_runner.go:130] > # 
	I1030 23:26:25.989303  229016 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1030 23:26:25.989312  229016 command_runner.go:130] > #
	I1030 23:26:25.989320  229016 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1030 23:26:25.989330  229016 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1030 23:26:25.989345  229016 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1030 23:26:25.989359  229016 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1030 23:26:25.989368  229016 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1030 23:26:25.989375  229016 command_runner.go:130] > [crio.image]
	I1030 23:26:25.989384  229016 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1030 23:26:25.989395  229016 command_runner.go:130] > # default_transport = "docker://"
	I1030 23:26:25.989409  229016 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1030 23:26:25.989429  229016 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1030 23:26:25.989439  229016 command_runner.go:130] > # global_auth_file = ""
	I1030 23:26:25.989449  229016 command_runner.go:130] > # The image used to instantiate infra containers.
	I1030 23:26:25.989461  229016 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:26:25.989470  229016 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1030 23:26:25.989483  229016 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1030 23:26:25.989494  229016 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1030 23:26:25.989499  229016 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:26:25.989505  229016 command_runner.go:130] > # pause_image_auth_file = ""
	I1030 23:26:25.989511  229016 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1030 23:26:25.989519  229016 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1030 23:26:25.989525  229016 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1030 23:26:25.989534  229016 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1030 23:26:25.989539  229016 command_runner.go:130] > # pause_command = "/pause"
	I1030 23:26:25.989547  229016 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1030 23:26:25.989553  229016 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1030 23:26:25.989562  229016 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1030 23:26:25.989568  229016 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1030 23:26:25.989575  229016 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1030 23:26:25.989579  229016 command_runner.go:130] > # signature_policy = ""
	I1030 23:26:25.989588  229016 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1030 23:26:25.989594  229016 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1030 23:26:25.989600  229016 command_runner.go:130] > # changing them here.
	I1030 23:26:25.989604  229016 command_runner.go:130] > # insecure_registries = [
	I1030 23:26:25.989610  229016 command_runner.go:130] > # ]
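	As a sketch only, skipping TLS verification for a private registry directly in crio.conf would look like the following; the registry host is hypothetical, and configuring /etc/containers/registries.conf instead is preferred, as noted above.
	insecure_registries = [
		"registry.internal.example:5000",
	]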
	I1030 23:26:25.989616  229016 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1030 23:26:25.989624  229016 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1030 23:26:25.989628  229016 command_runner.go:130] > # image_volumes = "mkdir"
	I1030 23:26:25.989637  229016 command_runner.go:130] > # Temporary directory to use for storing big files
	I1030 23:26:25.989646  229016 command_runner.go:130] > # big_files_temporary_dir = ""
	I1030 23:26:25.989659  229016 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1030 23:26:25.989669  229016 command_runner.go:130] > # CNI plugins.
	I1030 23:26:25.989678  229016 command_runner.go:130] > [crio.network]
	I1030 23:26:25.989686  229016 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1030 23:26:25.989697  229016 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1030 23:26:25.989708  229016 command_runner.go:130] > # cni_default_network = ""
	I1030 23:26:25.989717  229016 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1030 23:26:25.989728  229016 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1030 23:26:25.989738  229016 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1030 23:26:25.989748  229016 command_runner.go:130] > # plugin_dirs = [
	I1030 23:26:25.989756  229016 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1030 23:26:25.989765  229016 command_runner.go:130] > # ]
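	A minimal sketch of pinning the CNI selection with the keys documented above; the network name is an assumption and must match a configuration present in network_dir.
	[crio.network]
	# "kindnet" is a hypothetical network name; CRI-O only uses it if a matching config exists.
	cni_default_network = "kindnet"
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = [
		"/opt/cni/bin/",
	]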
	I1030 23:26:25.989776  229016 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1030 23:26:25.989786  229016 command_runner.go:130] > [crio.metrics]
	I1030 23:26:25.989794  229016 command_runner.go:130] > # Globally enable or disable metrics support.
	I1030 23:26:25.989803  229016 command_runner.go:130] > enable_metrics = true
	I1030 23:26:25.989811  229016 command_runner.go:130] > # Specify enabled metrics collectors.
	I1030 23:26:25.989823  229016 command_runner.go:130] > # By default, all metrics are enabled.
	I1030 23:26:25.989836  229016 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1030 23:26:25.989848  229016 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1030 23:26:25.989861  229016 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1030 23:26:25.989871  229016 command_runner.go:130] > # metrics_collectors = [
	I1030 23:26:25.989882  229016 command_runner.go:130] > # 	"operations",
	I1030 23:26:25.989893  229016 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1030 23:26:25.989905  229016 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1030 23:26:25.989915  229016 command_runner.go:130] > # 	"operations_errors",
	I1030 23:26:25.989922  229016 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1030 23:26:25.989933  229016 command_runner.go:130] > # 	"image_pulls_by_name",
	I1030 23:26:25.989942  229016 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1030 23:26:25.989953  229016 command_runner.go:130] > # 	"image_pulls_failures",
	I1030 23:26:25.989964  229016 command_runner.go:130] > # 	"image_pulls_successes",
	I1030 23:26:25.989975  229016 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1030 23:26:25.989984  229016 command_runner.go:130] > # 	"image_layer_reuse",
	I1030 23:26:25.989994  229016 command_runner.go:130] > # 	"containers_oom_total",
	I1030 23:26:25.990004  229016 command_runner.go:130] > # 	"containers_oom",
	I1030 23:26:25.990013  229016 command_runner.go:130] > # 	"processes_defunct",
	I1030 23:26:25.990023  229016 command_runner.go:130] > # 	"operations_total",
	I1030 23:26:25.990030  229016 command_runner.go:130] > # 	"operations_latency_seconds",
	I1030 23:26:25.990041  229016 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1030 23:26:25.990049  229016 command_runner.go:130] > # 	"operations_errors_total",
	I1030 23:26:25.990060  229016 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1030 23:26:25.990071  229016 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1030 23:26:25.990082  229016 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1030 23:26:25.990093  229016 command_runner.go:130] > # 	"image_pulls_success_total",
	I1030 23:26:25.990104  229016 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1030 23:26:25.990117  229016 command_runner.go:130] > # 	"containers_oom_count_total",
	I1030 23:26:25.990126  229016 command_runner.go:130] > # ]
	I1030 23:26:25.990135  229016 command_runner.go:130] > # The port on which the metrics server will listen.
	I1030 23:26:25.990146  229016 command_runner.go:130] > # metrics_port = 9090
	I1030 23:26:25.990157  229016 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1030 23:26:25.990168  229016 command_runner.go:130] > # metrics_socket = ""
	I1030 23:26:25.990179  229016 command_runner.go:130] > # The certificate for the secure metrics server.
	I1030 23:26:25.990188  229016 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1030 23:26:25.990196  229016 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1030 23:26:25.990203  229016 command_runner.go:130] > # certificate on any modification event.
	I1030 23:26:25.990208  229016 command_runner.go:130] > # metrics_cert = ""
	I1030 23:26:25.990215  229016 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1030 23:26:25.990221  229016 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1030 23:26:25.990227  229016 command_runner.go:130] > # metrics_key = ""
	I1030 23:26:25.990233  229016 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1030 23:26:25.990239  229016 command_runner.go:130] > [crio.tracing]
	I1030 23:26:25.990245  229016 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1030 23:26:25.990251  229016 command_runner.go:130] > # enable_tracing = false
	I1030 23:26:25.990257  229016 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1030 23:26:25.990264  229016 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1030 23:26:25.990269  229016 command_runner.go:130] > # Number of samples to collect per million spans.
	I1030 23:26:25.990276  229016 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
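	A sketch of enabling tracing with the keys shown above, assuming an OTLP gRPC collector is reachable on the default port; the sampling rate chosen here (every span) is an arbitrary choice for illustration.
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"
	tracing_sampling_rate_per_million = 1000000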
	I1030 23:26:25.990282  229016 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1030 23:26:25.990288  229016 command_runner.go:130] > [crio.stats]
	I1030 23:26:25.990294  229016 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1030 23:26:25.990302  229016 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1030 23:26:25.990308  229016 command_runner.go:130] > # stats_collection_period = 0
	I1030 23:26:25.990353  229016 command_runner.go:130] ! time="2023-10-30 23:26:25.949064929Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1030 23:26:25.990366  229016 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1030 23:26:25.990438  229016 cni.go:84] Creating CNI manager for ""
	I1030 23:26:25.990450  229016 cni.go:136] 2 nodes found, recommending kindnet
	I1030 23:26:25.990463  229016 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1030 23:26:25.990492  229016 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.85 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-370491 NodeName:multinode-370491-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.85 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 23:26:25.990624  229016 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.85
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-370491-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.85
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 23:26:25.990676  229016 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-370491-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.85
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-370491 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1030 23:26:25.990730  229016 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1030 23:26:26.000421  229016 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	I1030 23:26:26.000492  229016 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	
	Initiating transfer...
	I1030 23:26:26.000563  229016 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.3
	I1030 23:26:26.010310  229016 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256
	I1030 23:26:26.010347  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/linux/amd64/v1.28.3/kubectl -> /var/lib/minikube/binaries/v1.28.3/kubectl
	I1030 23:26:26.010385  229016 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/linux/amd64/v1.28.3/kubeadm
	I1030 23:26:26.010436  229016 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/linux/amd64/v1.28.3/kubelet
	I1030 23:26:26.010443  229016 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl
	I1030 23:26:26.018285  229016 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1030 23:26:26.018335  229016 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1030 23:26:26.018360  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/linux/amd64/v1.28.3/kubectl --> /var/lib/minikube/binaries/v1.28.3/kubectl (49872896 bytes)
	I1030 23:26:26.591608  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/linux/amd64/v1.28.3/kubeadm -> /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1030 23:26:26.591719  229016 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1030 23:26:26.597561  229016 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1030 23:26:26.597668  229016 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1030 23:26:26.597704  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/linux/amd64/v1.28.3/kubeadm --> /var/lib/minikube/binaries/v1.28.3/kubeadm (49045504 bytes)
	I1030 23:26:27.302585  229016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 23:26:27.315593  229016 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/linux/amd64/v1.28.3/kubelet -> /var/lib/minikube/binaries/v1.28.3/kubelet
	I1030 23:26:27.315725  229016 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubelet
	I1030 23:26:27.320209  229016 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubelet': No such file or directory
	I1030 23:26:27.320250  229016 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubelet': No such file or directory
	I1030 23:26:27.320276  229016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/linux/amd64/v1.28.3/kubelet --> /var/lib/minikube/binaries/v1.28.3/kubelet (110780416 bytes)
	I1030 23:26:27.853286  229016 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1030 23:26:27.861737  229016 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1030 23:26:27.877137  229016 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 23:26:27.893085  229016 ssh_runner.go:195] Run: grep 192.168.39.231	control-plane.minikube.internal$ /etc/hosts
	I1030 23:26:27.896724  229016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.231	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 23:26:27.908639  229016 host.go:66] Checking if "multinode-370491" exists ...
	I1030 23:26:27.908909  229016 config.go:182] Loaded profile config "multinode-370491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:26:27.909087  229016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:26:27.909133  229016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:26:27.923840  229016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45627
	I1030 23:26:27.924295  229016 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:26:27.924756  229016 main.go:141] libmachine: Using API Version  1
	I1030 23:26:27.924779  229016 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:26:27.925123  229016 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:26:27.925320  229016 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:26:27.925466  229016 start.go:304] JoinCluster: &{Name:multinode-370491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.3 ClusterName:multinode-370491 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.85 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1030 23:26:27.925569  229016 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1030 23:26:27.925584  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:26:27.928233  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:26:27.928651  229016 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:26:27.928680  229016 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:26:27.928799  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:26:27.928997  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:26:27.929150  229016 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:26:27.929297  229016 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa Username:docker}
	I1030 23:26:28.092501  229016 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token o84r12.hbopji1ctjydqqxj --discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1030 23:26:28.092639  229016 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.85 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1030 23:26:28.092692  229016 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o84r12.hbopji1ctjydqqxj --discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-370491-m02"
	I1030 23:26:28.141447  229016 command_runner.go:130] > [preflight] Running pre-flight checks
	I1030 23:26:28.296246  229016 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1030 23:26:28.296277  229016 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1030 23:26:28.333884  229016 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 23:26:28.333987  229016 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 23:26:28.334215  229016 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1030 23:26:28.454875  229016 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1030 23:26:30.469326  229016 command_runner.go:130] > This node has joined the cluster:
	I1030 23:26:30.469353  229016 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1030 23:26:30.469362  229016 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1030 23:26:30.469369  229016 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1030 23:26:30.471397  229016 command_runner.go:130] ! W1030 23:26:28.119610     822 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1030 23:26:30.471425  229016 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1030 23:26:30.471450  229016 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token o84r12.hbopji1ctjydqqxj --discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-370491-m02": (2.378742287s)
	I1030 23:26:30.471477  229016 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1030 23:26:30.612105  229016 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1030 23:26:30.730065  229016 start.go:306] JoinCluster complete in 2.804588345s
	I1030 23:26:30.730097  229016 cni.go:84] Creating CNI manager for ""
	I1030 23:26:30.730105  229016 cni.go:136] 2 nodes found, recommending kindnet
	I1030 23:26:30.730184  229016 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1030 23:26:30.745208  229016 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1030 23:26:30.745233  229016 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1030 23:26:30.745241  229016 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1030 23:26:30.745248  229016 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1030 23:26:30.745254  229016 command_runner.go:130] > Access: 2023-10-30 23:25:04.975208312 +0000
	I1030 23:26:30.745259  229016 command_runner.go:130] > Modify: 2023-10-30 22:33:43.000000000 +0000
	I1030 23:26:30.745263  229016 command_runner.go:130] > Change: 2023-10-30 23:25:03.217208312 +0000
	I1030 23:26:30.745268  229016 command_runner.go:130] >  Birth: -
	I1030 23:26:30.745550  229016 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1030 23:26:30.745573  229016 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1030 23:26:30.779064  229016 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1030 23:26:31.080924  229016 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1030 23:26:31.085404  229016 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1030 23:26:31.088893  229016 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1030 23:26:31.102209  229016 command_runner.go:130] > daemonset.apps/kindnet configured
	I1030 23:26:31.105379  229016 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:26:31.105775  229016 kapi.go:59] client config for multinode-370491: &rest.Config{Host:"https://192.168.39.231:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.crt", KeyFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.key", CAFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 23:26:31.106286  229016 round_trippers.go:463] GET https://192.168.39.231:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1030 23:26:31.106307  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:31.106317  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:31.106328  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:31.108926  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:26:31.108964  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:31.108974  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:31 GMT
	I1030 23:26:31.108980  229016 round_trippers.go:580]     Audit-Id: 44e281dc-42e5-4cc5-9c76-5c070547ea39
	I1030 23:26:31.108985  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:31.108990  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:31.108995  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:31.109001  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:31.109006  229016 round_trippers.go:580]     Content-Length: 291
	I1030 23:26:31.109029  229016 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"20d25ead-69ff-4f03-b32f-13c215a6d708","resourceVersion":"412","creationTimestamp":"2023-10-30T23:25:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1030 23:26:31.109118  229016 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-370491" context rescaled to 1 replicas
	I1030 23:26:31.109145  229016 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.85 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1030 23:26:31.111821  229016 out.go:177] * Verifying Kubernetes components...
	I1030 23:26:31.113232  229016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 23:26:31.131202  229016 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:26:31.131421  229016 kapi.go:59] client config for multinode-370491: &rest.Config{Host:"https://192.168.39.231:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.crt", KeyFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.key", CAFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 23:26:31.131651  229016 node_ready.go:35] waiting up to 6m0s for node "multinode-370491-m02" to be "Ready" ...
	I1030 23:26:31.131785  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:26:31.131798  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:31.131805  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:31.131812  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:31.134321  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:26:31.134341  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:31.134350  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:31 GMT
	I1030 23:26:31.134358  229016 round_trippers.go:580]     Audit-Id: 8dc4f016-560d-413e-8fc5-ed5c2fc7a68f
	I1030 23:26:31.134367  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:31.134375  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:31.134384  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:31.134398  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:31.134409  229016 round_trippers.go:580]     Content-Length: 3530
	I1030 23:26:31.134475  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e","resourceVersion":"469","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1030 23:26:31.134740  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:26:31.134752  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:31.134758  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:31.134764  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:31.136912  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:26:31.136930  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:31.136960  229016 round_trippers.go:580]     Audit-Id: bd792dc6-b962-4f3c-a9b1-900c23c4eb60
	I1030 23:26:31.136969  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:31.136977  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:31.136983  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:31.136989  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:31.136997  229016 round_trippers.go:580]     Content-Length: 3530
	I1030 23:26:31.137002  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:31 GMT
	I1030 23:26:31.137074  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e","resourceVersion":"469","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1030 23:26:31.638477  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:26:31.638507  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:31.638516  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:31.638522  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:31.641739  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:26:31.641767  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:31.641778  229016 round_trippers.go:580]     Content-Length: 3530
	I1030 23:26:31.641787  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:31 GMT
	I1030 23:26:31.641796  229016 round_trippers.go:580]     Audit-Id: 88dfb538-f0fb-47f2-936c-5ec708467f90
	I1030 23:26:31.641805  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:31.641817  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:31.641826  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:31.641842  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:31.642118  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e","resourceVersion":"469","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1030 23:26:32.137728  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:26:32.137752  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:32.137761  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:32.137771  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:32.140773  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:26:32.140807  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:32.140833  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:32.140843  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:32.140852  229016 round_trippers.go:580]     Content-Length: 3530
	I1030 23:26:32.140866  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:32 GMT
	I1030 23:26:32.140875  229016 round_trippers.go:580]     Audit-Id: bde3b808-aae4-446a-b32f-83b20e0e86af
	I1030 23:26:32.140895  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:32.140908  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:32.141007  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e","resourceVersion":"469","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1030 23:26:32.638199  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:26:32.638224  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:32.638233  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:32.638239  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:32.641857  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:26:32.641884  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:32.641894  229016 round_trippers.go:580]     Audit-Id: e3c6f87b-f43a-4d50-a04e-a8e71facdd84
	I1030 23:26:32.641902  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:32.641910  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:32.641938  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:32.641950  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:32.641959  229016 round_trippers.go:580]     Content-Length: 3639
	I1030 23:26:32.641969  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:32 GMT
	I1030 23:26:32.642236  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e","resourceVersion":"476","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1030 23:26:33.138063  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:26:33.138085  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:33.138093  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:33.138099  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:33.141236  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:26:33.141265  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:33.141275  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:33.141284  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:33.141292  229016 round_trippers.go:580]     Content-Length: 3639
	I1030 23:26:33.141300  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:33 GMT
	I1030 23:26:33.141307  229016 round_trippers.go:580]     Audit-Id: 791dffbe-08df-491d-a9c8-c06b9ce53806
	I1030 23:26:33.141315  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:33.141324  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:33.141604  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e","resourceVersion":"476","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1030 23:26:33.141875  229016 node_ready.go:58] node "multinode-370491-m02" has status "Ready":"False"
	I1030 23:26:33.638202  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:26:33.638228  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:33.638236  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:33.638242  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:33.642471  229016 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 23:26:33.642500  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:33.642509  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:33.642518  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:33.642527  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:33.642536  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:33.642544  229016 round_trippers.go:580]     Content-Length: 3639
	I1030 23:26:33.642552  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:33 GMT
	I1030 23:26:33.642560  229016 round_trippers.go:580]     Audit-Id: 746e3d3e-11bd-4f39-a9c1-11bf1a34944b
	I1030 23:26:33.642729  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e","resourceVersion":"476","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1030 23:26:34.138325  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:26:34.138358  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:34.138372  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:34.138383  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:34.141897  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:26:34.141920  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:34.141926  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:34.141932  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:34.141940  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:34.141949  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:34.141957  229016 round_trippers.go:580]     Content-Length: 3639
	I1030 23:26:34.141965  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:34 GMT
	I1030 23:26:34.141973  229016 round_trippers.go:580]     Audit-Id: 936e657e-991a-43b6-8521-150b815c42b6
	I1030 23:26:34.142067  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e","resourceVersion":"476","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1030 23:26:34.638186  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:26:34.638208  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:34.638216  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:34.638230  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:34.641027  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:26:34.641056  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:34.641066  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:34.641075  229016 round_trippers.go:580]     Content-Length: 3639
	I1030 23:26:34.641084  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:34 GMT
	I1030 23:26:34.641093  229016 round_trippers.go:580]     Audit-Id: 726fcd30-5b88-4c7f-bf5c-288a2d8c996a
	I1030 23:26:34.641101  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:34.641112  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:34.641121  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:34.641247  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e","resourceVersion":"476","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1030 23:26:35.137731  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:26:35.137761  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:35.137773  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:35.137787  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:35.140861  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:26:35.140890  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:35.140902  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:35.140912  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:35.140921  229016 round_trippers.go:580]     Content-Length: 3639
	I1030 23:26:35.140930  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:35 GMT
	I1030 23:26:35.140953  229016 round_trippers.go:580]     Audit-Id: 6f437323-e7b4-4355-b888-fd7faf1f79bd
	I1030 23:26:35.140967  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:35.140979  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:35.141055  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e","resourceVersion":"476","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1030 23:26:35.638231  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:26:35.638258  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:35.638271  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:35.638282  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:35.641810  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:26:35.641839  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:35.641851  229016 round_trippers.go:580]     Audit-Id: 8e09de95-ce16-4060-b937-8ebca882c84c
	I1030 23:26:35.641859  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:35.641866  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:35.641874  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:35.641882  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:35.641894  229016 round_trippers.go:580]     Content-Length: 3639
	I1030 23:26:35.641907  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:35 GMT
	I1030 23:26:35.642030  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e","resourceVersion":"476","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1030 23:26:35.642373  229016 node_ready.go:58] node "multinode-370491-m02" has status "Ready":"False"
	I1030 23:26:36.137511  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:26:36.137535  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:36.137544  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:36.137550  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:36.146797  229016 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1030 23:26:36.146820  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:36.146828  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:36.146833  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:36.146855  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:36.146860  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:36.146865  229016 round_trippers.go:580]     Content-Length: 3639
	I1030 23:26:36.146870  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:36 GMT
	I1030 23:26:36.146888  229016 round_trippers.go:580]     Audit-Id: 5a13e435-b6d7-4e2a-a811-c8763ac2e0f2
	I1030 23:26:36.147127  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e","resourceVersion":"476","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1030 23:26:36.638220  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:26:36.638245  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:36.638254  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:36.638260  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:36.641078  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:26:36.641104  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:36.641124  229016 round_trippers.go:580]     Content-Length: 3639
	I1030 23:26:36.641132  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:36 GMT
	I1030 23:26:36.641147  229016 round_trippers.go:580]     Audit-Id: 7039e8dd-5a31-4d9b-b786-31680a7bb983
	I1030 23:26:36.641159  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:36.641171  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:36.641181  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:36.641190  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:36.641284  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e","resourceVersion":"476","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1030 23:26:37.137877  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:26:37.137906  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:37.137915  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:37.137921  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:37.140615  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:26:37.140636  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:37.140644  229016 round_trippers.go:580]     Audit-Id: cc7fb4d5-7ae2-46ff-ba21-a9a16a7e00b9
	I1030 23:26:37.140649  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:37.140654  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:37.140659  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:37.140664  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:37.140670  229016 round_trippers.go:580]     Content-Length: 3639
	I1030 23:26:37.140675  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:37 GMT
	I1030 23:26:37.140751  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e","resourceVersion":"476","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1030 23:26:37.638206  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:26:37.638233  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:37.638247  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:37.638256  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:37.641141  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:26:37.641176  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:37.641188  229016 round_trippers.go:580]     Content-Length: 3639
	I1030 23:26:37.641203  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:37 GMT
	I1030 23:26:37.641212  229016 round_trippers.go:580]     Audit-Id: cae64971-5c4c-4866-ac41-e1199473b4c0
	I1030 23:26:37.641224  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:37.641234  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:37.641241  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:37.641251  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:37.641371  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e","resourceVersion":"476","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1030 23:26:38.138201  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:26:38.138224  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:38.138233  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:38.138239  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:38.140933  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:26:38.140969  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:38.140977  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:38.140983  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:38.140988  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:38.140994  229016 round_trippers.go:580]     Content-Length: 3639
	I1030 23:26:38.141000  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:38 GMT
	I1030 23:26:38.141005  229016 round_trippers.go:580]     Audit-Id: 2fcd7abf-b9d9-44f4-a11a-ae3c65bcab3f
	I1030 23:26:38.141024  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:38.141080  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e","resourceVersion":"476","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1030 23:26:38.141304  229016 node_ready.go:58] node "multinode-370491-m02" has status "Ready":"False"
	I1030 23:26:38.637577  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:26:38.637599  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:38.637607  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:38.637614  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:38.639898  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:26:38.639937  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:38.639945  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:38.639952  229016 round_trippers.go:580]     Content-Length: 3725
	I1030 23:26:38.639957  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:38 GMT
	I1030 23:26:38.639962  229016 round_trippers.go:580]     Audit-Id: f2030294-7e30-4d11-900c-534d41e9f4d0
	I1030 23:26:38.639967  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:38.639973  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:38.639980  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:38.640082  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e","resourceVersion":"497","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2701 chars]
	I1030 23:26:38.640412  229016 node_ready.go:49] node "multinode-370491-m02" has status "Ready":"True"
	I1030 23:26:38.640437  229016 node_ready.go:38] duration metric: took 7.508768451s waiting for node "multinode-370491-m02" to be "Ready" ...
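
For context, the node_ready wait logged above is a plain poll loop: the Node object is re-fetched roughly every 500ms until its Ready condition reports True (the resourceVersion jump from 476 to 497 at 23:26:38 is the status update that flips it). A minimal client-go sketch of the same idea, assuming a standard clientset; the package, helper name, and interval are illustrative, not minikube's actual code:

    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForNodeReady polls the named node every 500ms until its Ready
    // condition is True or the timeout expires.
    func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as "not ready yet" and keep polling
                }
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
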
	I1030 23:26:38.640450  229016 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 23:26:38.640514  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I1030 23:26:38.640521  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:38.640528  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:38.640534  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:38.643767  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:26:38.643786  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:38.643796  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:38.643805  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:38 GMT
	I1030 23:26:38.643814  229016 round_trippers.go:580]     Audit-Id: 688b82f8-1da3-4078-bc58-3a4c430a63eb
	I1030 23:26:38.643823  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:38.643831  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:38.643840  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:38.644685  229016 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"497"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"407","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67362 chars]
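
The pod_ready.go:35 step above lists every pod in kube-system once and then waits, pod by pod, for the ones whose labels mark them as system-critical; the label list comes straight from that log line. A hedged sketch of that selection, reusing the imports from the node-readiness sketch above; the helper is illustrative, not the test's actual code:

    // systemCriticalPods lists kube-system pods and keeps those carrying one of
    // the label key/value pairs named in the pod_ready.go:35 log line.
    func systemCriticalPods(ctx context.Context, cs kubernetes.Interface) ([]corev1.Pod, error) {
        wanted := []struct{ key, value string }{
            {"k8s-app", "kube-dns"},
            {"component", "etcd"},
            {"component", "kube-apiserver"},
            {"component", "kube-controller-manager"},
            {"k8s-app", "kube-proxy"},
            {"component", "kube-scheduler"},
        }
        list, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return nil, err
        }
        var out []corev1.Pod
        for _, pod := range list.Items {
            for _, w := range wanted {
                if pod.Labels[w.key] == w.value {
                    out = append(out, pod)
                    break
                }
            }
        }
        return out, nil
    }
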
	I1030 23:26:38.646755  229016 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6pgvt" in "kube-system" namespace to be "Ready" ...
	I1030 23:26:38.646822  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6pgvt
	I1030 23:26:38.646832  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:38.646840  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:38.646846  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:38.648726  229016 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:26:38.648745  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:38.648755  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:38.648764  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:38.648773  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:38.648790  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:38.648799  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:38 GMT
	I1030 23:26:38.648807  229016 round_trippers.go:580]     Audit-Id: 1bc3d2e6-82aa-4993-bce3-8e112b1bfd4a
	I1030 23:26:38.649145  229016 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"407","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1030 23:26:38.649737  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:26:38.649755  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:38.649763  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:38.649768  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:38.651707  229016 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:26:38.651728  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:38.651737  229016 round_trippers.go:580]     Audit-Id: dea93c96-5bb2-41dc-a920-363c4b8a6c7e
	I1030 23:26:38.651749  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:38.651758  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:38.651769  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:38.651777  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:38.651788  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:38 GMT
	I1030 23:26:38.651918  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"418","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1030 23:26:38.652268  229016 pod_ready.go:92] pod "coredns-5dd5756b68-6pgvt" in "kube-system" namespace has status "Ready":"True"
	I1030 23:26:38.652286  229016 pod_ready.go:81] duration metric: took 5.508073ms waiting for pod "coredns-5dd5756b68-6pgvt" in "kube-system" namespace to be "Ready" ...
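
Each per-pod wait in this block pairs the pod GET with a GET of the node it is scheduled on, and the pod counts as ready once its Ready condition reports True, which is what the pod_ready.go:92 lines are confirming. A minimal sketch of that condition check, using the corev1 types imported in the earlier sketch (illustrative only):

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }
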
	I1030 23:26:38.652298  229016 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:26:38.652358  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-370491
	I1030 23:26:38.652369  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:38.652380  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:38.652393  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:38.654050  229016 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:26:38.654067  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:38.654074  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:38.654079  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:38 GMT
	I1030 23:26:38.654084  229016 round_trippers.go:580]     Audit-Id: 338f7ab1-e090-4404-aa4a-73ed3c00835d
	I1030 23:26:38.654091  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:38.654100  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:38.654108  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:38.654237  229016 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-370491","namespace":"kube-system","uid":"eb24307f-f00b-4406-bb05-b18eafd0eca1","resourceVersion":"413","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.231:2379","kubernetes.io/config.hash":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.mirror":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.seen":"2023-10-30T23:25:35.493661052Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1030 23:26:38.654560  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:26:38.654572  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:38.654579  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:38.654585  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:38.656293  229016 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:26:38.656307  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:38.656314  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:38.656323  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:38.656332  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:38 GMT
	I1030 23:26:38.656351  229016 round_trippers.go:580]     Audit-Id: f9f5e7ff-cd53-4825-9666-d9a477f8455c
	I1030 23:26:38.656356  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:38.656365  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:38.656756  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"418","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1030 23:26:38.657048  229016 pod_ready.go:92] pod "etcd-multinode-370491" in "kube-system" namespace has status "Ready":"True"
	I1030 23:26:38.657062  229016 pod_ready.go:81] duration metric: took 4.757448ms waiting for pod "etcd-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:26:38.657073  229016 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:26:38.657116  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-370491
	I1030 23:26:38.657124  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:38.657131  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:38.657138  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:38.659870  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:26:38.659882  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:38.659887  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:38.659893  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:38 GMT
	I1030 23:26:38.659898  229016 round_trippers.go:580]     Audit-Id: ebd79015-eff6-4393-a9c4-6467d5429ed2
	I1030 23:26:38.659902  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:38.659907  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:38.659917  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:38.660035  229016 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-370491","namespace":"kube-system","uid":"d1874c7c-46ee-42eb-a395-c0d0138b3422","resourceVersion":"414","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.231:8443","kubernetes.io/config.hash":"377aac2edfa5973c73516a60b3dd1cd5","kubernetes.io/config.mirror":"377aac2edfa5973c73516a60b3dd1cd5","kubernetes.io/config.seen":"2023-10-30T23:25:35.493664410Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1030 23:26:38.660453  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:26:38.660470  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:38.660481  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:38.660490  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:38.662101  229016 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:26:38.662121  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:38.662130  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:38 GMT
	I1030 23:26:38.662138  229016 round_trippers.go:580]     Audit-Id: aabb1165-8cab-4476-bb3d-95f28f6103fd
	I1030 23:26:38.662143  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:38.662148  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:38.662156  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:38.662162  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:38.662312  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"418","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1030 23:26:38.662645  229016 pod_ready.go:92] pod "kube-apiserver-multinode-370491" in "kube-system" namespace has status "Ready":"True"
	I1030 23:26:38.662659  229016 pod_ready.go:81] duration metric: took 5.579508ms waiting for pod "kube-apiserver-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:26:38.662669  229016 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:26:38.662726  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-370491
	I1030 23:26:38.662736  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:38.662747  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:38.662757  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:38.664751  229016 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:26:38.664767  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:38.664775  229016 round_trippers.go:580]     Audit-Id: c1505192-5b8e-4344-b69f-233e25dc2b30
	I1030 23:26:38.664782  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:38.664790  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:38.664799  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:38.664812  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:38.664824  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:38 GMT
	I1030 23:26:38.664976  229016 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-370491","namespace":"kube-system","uid":"4da6c57f-cec4-498b-a390-3fa2f8619a0b","resourceVersion":"415","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"55259bd1b9f1e240aa9139582b4696e7","kubernetes.io/config.mirror":"55259bd1b9f1e240aa9139582b4696e7","kubernetes.io/config.seen":"2023-10-30T23:25:35.493665415Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1030 23:26:38.665414  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:26:38.665428  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:38.665438  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:38.665448  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:38.667026  229016 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:26:38.667041  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:38.667047  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:38 GMT
	I1030 23:26:38.667053  229016 round_trippers.go:580]     Audit-Id: 0bfa96cd-e597-4e3a-9fa1-ca4389fdcdfa
	I1030 23:26:38.667057  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:38.667062  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:38.667068  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:38.667073  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:38.667191  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"418","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1030 23:26:38.667453  229016 pod_ready.go:92] pod "kube-controller-manager-multinode-370491" in "kube-system" namespace has status "Ready":"True"
	I1030 23:26:38.667467  229016 pod_ready.go:81] duration metric: took 4.790364ms waiting for pod "kube-controller-manager-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:26:38.667478  229016 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g9wzd" in "kube-system" namespace to be "Ready" ...
	I1030 23:26:38.837870  229016 request.go:629] Waited for 170.330151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wzd
	I1030 23:26:38.837939  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wzd
	I1030 23:26:38.837944  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:38.837951  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:38.837957  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:38.840793  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:26:38.840816  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:38.840825  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:38 GMT
	I1030 23:26:38.840833  229016 round_trippers.go:580]     Audit-Id: 5ef10342-87cb-4e1e-8321-1d783e5a19be
	I1030 23:26:38.840843  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:38.840852  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:38.840860  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:38.840878  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:38.841048  229016 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g9wzd","generateName":"kube-proxy-","namespace":"kube-system","uid":"9bffc44c-9d7f-4d1c-82e7-f249c53bf452","resourceVersion":"485","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8ea24659-b585-4c83-ad95-b587ea718f59","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ea24659-b585-4c83-ad95-b587ea718f59\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5521 chars]
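
The "Waited ... due to client-side throttling, not priority and fairness" messages here and below come from client-go's built-in client-side rate limiter (by default about 5 requests per second with a burst of 10), not from the API server: the bursts of GETs in this wait loop briefly exceed that budget, so the client delays them by a few hundred milliseconds. A sketch of how a client could raise those limits when building its clientset, with illustrative values rather than whatever minikube actually configures:

    package clientcfg

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient builds a clientset with a larger client-side rate limit so
    // short bursts of requests are not delayed by the default QPS=5 / Burst=10.
    func newFastClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // illustrative; client-go's default is 5 requests/second
        cfg.Burst = 100 // illustrative; client-go's default burst is 10
        return kubernetes.NewForConfig(cfg)
    }
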
	I1030 23:26:39.037902  229016 request.go:629] Waited for 196.396204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:26:39.037966  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:26:39.037982  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:39.037990  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:39.037997  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:39.041848  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:26:39.041878  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:39.041888  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:39 GMT
	I1030 23:26:39.041896  229016 round_trippers.go:580]     Audit-Id: 097228e8-5032-4c64-8016-834e631efbcf
	I1030 23:26:39.041905  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:39.041912  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:39.041919  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:39.041943  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:39.041953  229016 round_trippers.go:580]     Content-Length: 3725
	I1030 23:26:39.042054  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e","resourceVersion":"497","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2701 chars]
	I1030 23:26:39.042319  229016 pod_ready.go:92] pod "kube-proxy-g9wzd" in "kube-system" namespace has status "Ready":"True"
	I1030 23:26:39.042336  229016 pod_ready.go:81] duration metric: took 374.849674ms waiting for pod "kube-proxy-g9wzd" in "kube-system" namespace to be "Ready" ...
	I1030 23:26:39.042352  229016 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xbsl5" in "kube-system" namespace to be "Ready" ...
	I1030 23:26:39.237777  229016 request.go:629] Waited for 195.315474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xbsl5
	I1030 23:26:39.237865  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xbsl5
	I1030 23:26:39.237870  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:39.237880  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:39.237895  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:39.240837  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:26:39.240860  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:39.240871  229016 round_trippers.go:580]     Audit-Id: 3721e1f5-ff12-4ce9-8f8b-1db9b521d3b9
	I1030 23:26:39.240879  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:39.240886  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:39.240894  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:39.240905  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:39.240913  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:39 GMT
	I1030 23:26:39.241515  229016 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xbsl5","generateName":"kube-proxy-","namespace":"kube-system","uid":"eb41a78a-bf80-4546-b7d6-423a8c3ad0e1","resourceVersion":"377","creationTimestamp":"2023-10-30T23:25:47Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8ea24659-b585-4c83-ad95-b587ea718f59","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ea24659-b585-4c83-ad95-b587ea718f59\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1030 23:26:39.438423  229016 request.go:629] Waited for 196.419923ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:26:39.438513  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:26:39.438522  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:39.438533  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:39.438544  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:39.441653  229016 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:26:39.441675  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:39.441681  229016 round_trippers.go:580]     Audit-Id: c78a6b84-6544-4e7a-803d-555fbd0608bb
	I1030 23:26:39.441687  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:39.441692  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:39.441697  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:39.441702  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:39.441707  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:39 GMT
	I1030 23:26:39.441966  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"418","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1030 23:26:39.442318  229016 pod_ready.go:92] pod "kube-proxy-xbsl5" in "kube-system" namespace has status "Ready":"True"
	I1030 23:26:39.442335  229016 pod_ready.go:81] duration metric: took 399.975851ms waiting for pod "kube-proxy-xbsl5" in "kube-system" namespace to be "Ready" ...
	I1030 23:26:39.442349  229016 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:26:39.637726  229016 request.go:629] Waited for 195.299617ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-370491
	I1030 23:26:39.637813  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-370491
	I1030 23:26:39.637818  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:39.637826  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:39.637832  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:39.643067  229016 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1030 23:26:39.643087  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:39.643094  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:39.643100  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:39 GMT
	I1030 23:26:39.643105  229016 round_trippers.go:580]     Audit-Id: 64734d42-27ee-4540-b690-19602696ff8f
	I1030 23:26:39.643110  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:39.643115  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:39.643121  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:39.644089  229016 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-370491","namespace":"kube-system","uid":"b71476bb-1843-4ff9-8639-40ae73b72c8b","resourceVersion":"379","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dd3eb04179d9bdc0a8332c92e6e42d18","kubernetes.io/config.mirror":"dd3eb04179d9bdc0a8332c92e6e42d18","kubernetes.io/config.seen":"2023-10-30T23:25:35.493666103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1030 23:26:39.837927  229016 request.go:629] Waited for 193.349846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:26:39.838020  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:26:39.838030  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:39.838038  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:39.838044  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:39.840814  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:26:39.840841  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:39.840852  229016 round_trippers.go:580]     Audit-Id: 2ae369ab-dc82-4f2c-b5f6-8fac5ed804cd
	I1030 23:26:39.840861  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:39.840870  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:39.840880  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:39.840890  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:39.840898  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:39 GMT
	I1030 23:26:39.841251  229016 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"418","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5906 chars]
	I1030 23:26:39.841601  229016 pod_ready.go:92] pod "kube-scheduler-multinode-370491" in "kube-system" namespace has status "Ready":"True"
	I1030 23:26:39.841618  229016 pod_ready.go:81] duration metric: took 399.261709ms waiting for pod "kube-scheduler-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:26:39.841628  229016 pod_ready.go:38] duration metric: took 1.201160951s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 23:26:39.841641  229016 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 23:26:39.841690  229016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 23:26:39.857506  229016 system_svc.go:56] duration metric: took 15.856672ms WaitForService to wait for kubelet.
	I1030 23:26:39.857535  229016 kubeadm.go:581] duration metric: took 8.748364865s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1030 23:26:39.857556  229016 node_conditions.go:102] verifying NodePressure condition ...
	I1030 23:26:40.037967  229016 request.go:629] Waited for 180.340023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes
	I1030 23:26:40.038098  229016 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes
	I1030 23:26:40.038109  229016 round_trippers.go:469] Request Headers:
	I1030 23:26:40.038120  229016 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:26:40.038130  229016 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:26:40.040853  229016 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:26:40.040881  229016 round_trippers.go:577] Response Headers:
	I1030 23:26:40.040897  229016 round_trippers.go:580]     Audit-Id: c23fbf4a-a389-4bf8-8481-30cf0c56a174
	I1030 23:26:40.040906  229016 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:26:40.040914  229016 round_trippers.go:580]     Content-Type: application/json
	I1030 23:26:40.040922  229016 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:26:40.040931  229016 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:26:40.040951  229016 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:26:40 GMT
	I1030 23:26:40.041370  229016 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"497"},"items":[{"metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"418","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manage
dFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1"," [truncated 9652 chars]
	I1030 23:26:40.041845  229016 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1030 23:26:40.041864  229016 node_conditions.go:123] node cpu capacity is 2
	I1030 23:26:40.041874  229016 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1030 23:26:40.041878  229016 node_conditions.go:123] node cpu capacity is 2
	I1030 23:26:40.041882  229016 node_conditions.go:105] duration metric: took 184.321095ms to run NodePressure ...
	I1030 23:26:40.041901  229016 start.go:228] waiting for startup goroutines ...
	I1030 23:26:40.041944  229016 start.go:242] writing updated cluster config ...
	I1030 23:26:40.042217  229016 ssh_runner.go:195] Run: rm -f paused
	I1030 23:26:40.095649  229016 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1030 23:26:40.098617  229016 out.go:177] * Done! kubectl is now configured to use "multinode-370491" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-10-30 23:25:03 UTC, ends at Mon 2023-10-30 23:26:46 UTC. --
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.256960460Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698708406256949420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6bfebf00-0171-49a7-8666-84004f156d0e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.257575310Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f7f42ee0-7bf5-4f82-8820-21dc4b556692 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.257618992Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f7f42ee0-7bf5-4f82-8820-21dc4b556692 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.257823187Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a93c2a9848351fda56c95a723ef469ef1ac6a1ff899df0fc99ab104180c85eb0,PodSandboxId:0d2388050c37c9afcd769c46a12792b500bf29af26d7372f97bfd3b1691e5fe2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698708402725948076,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-7hhs5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c28c851-1dbe-434e-a041-4bf33b87bd7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7649ab49,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97addc78703a23c1a54257961f9eec4d2ef10141a3a8130bc72f57c8a5f09044,PodSandboxId:be362a11925df7a5a3c6faca3d6e3a2332b5970761475853046d25e17655d402,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698708354001770774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6pgvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d854be1d-ae4e-420a-9853-253f0258915c,},Annotations:map[string]string{io.kubernetes.container.hash: 11ac28ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:733d6801d38633f8edb457fbb80cad5e845cdf4b696060dc7a2d800766607706,PodSandboxId:a3660ccefba4eb9a4dbd7f9896962f7ee5ec5f2a8d9c73aa1024384669615be2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698708353777530607,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 6f2bbacd-e138-4f82-961e-76f1daf88ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 324ceadb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6c5f08fe079ea61ba077b83705e2bf6a0addfe0016a538786f54ecd3026fe1,PodSandboxId:ba233023e51e1cd728bf397c82b6b14bf2bad544d3e500cddefe8a2c7bb89970,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698708351310342662,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m9f5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: a79ceb52-48df-4240-9edc-05c81bf58f73,},Annotations:map[string]string{io.kubernetes.container.hash: 393bde1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ba6d52cc922a6ea54326dcd8a3f1cdd128599b4ff53ce57c1f71de0977f373,PodSandboxId:54dee3ed6b0b1247d9652ae4adfa4ef4e7fd7a25b63fcc8f4f0d47f64423c9f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698708349143596487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xbsl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb41a78a-bf80-4546-b7d6-423a8c
3ad0e1,},Annotations:map[string]string{io.kubernetes.container.hash: b2372445,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1477e09f26d4f3d07b322c6a75719e2af574ae3f951ab1ebea4f35dc577ff93e,PodSandboxId:fe81c12faee48d063cb49d075ce8a9cb397711561d6ce93c65eafcfe036dcbe1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698708328650704597,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd3eb04179d9bdc0a8332c92e6e42d18,},Ann
otations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd16f1ddb981cd2509d3db88fcdf1ec90dad60ebc418231dadf485f2d86e2498,PodSandboxId:5fe8fc7d6e383471850930ab82d6bf4bd4e8e948ac3e1c18cb6e7afb4e7dee51,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698708328351984455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840387190d79e7771c73d8f6fcb777d3,},Annotations:map[string]string{io.kubernetes.container.h
ash: a0e21061,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b901e332ebdd5cdcfea333c45301b3f30b049bfdc2be8098c0bbcbf5bc19d008,PodSandboxId:4c4760370495b53067096447e1b66f917ee81decc8ea048153592fc9e5181e93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698708328239414622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55259bd1b9f1e240aa9139582b4696e7,},Annotations:map[string]string{i
o.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724b0a6b7a4a53994e7cc49beec2a61445fcd4c11b7aaf7be3c3aacedbe2a47b,PodSandboxId:d788127d94bb12b337fae941c7720c0fd9f7c95ca881c67a9dc8fef37b02d55f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698708327985379527,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377aac2edfa5973c73516a60b3dd1cd5,},Annotations:map[string]string{io.kubernetes
.container.hash: 4e859895,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f7f42ee0-7bf5-4f82-8820-21dc4b556692 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.298316115Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=86ce6af1-f517-40e0-b8e7-53a98d416cef name=/runtime.v1.RuntimeService/Version
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.298375020Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=86ce6af1-f517-40e0-b8e7-53a98d416cef name=/runtime.v1.RuntimeService/Version
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.299852763Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=cdddf642-11dc-49df-8a10-65522b8e6386 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.300667531Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698708406300644868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=cdddf642-11dc-49df-8a10-65522b8e6386 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.301303331Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0e992aa4-20d6-410b-9d9b-9ab15a5b80d7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.301386068Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0e992aa4-20d6-410b-9d9b-9ab15a5b80d7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.301584283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a93c2a9848351fda56c95a723ef469ef1ac6a1ff899df0fc99ab104180c85eb0,PodSandboxId:0d2388050c37c9afcd769c46a12792b500bf29af26d7372f97bfd3b1691e5fe2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698708402725948076,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-7hhs5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c28c851-1dbe-434e-a041-4bf33b87bd7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7649ab49,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97addc78703a23c1a54257961f9eec4d2ef10141a3a8130bc72f57c8a5f09044,PodSandboxId:be362a11925df7a5a3c6faca3d6e3a2332b5970761475853046d25e17655d402,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698708354001770774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6pgvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d854be1d-ae4e-420a-9853-253f0258915c,},Annotations:map[string]string{io.kubernetes.container.hash: 11ac28ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:733d6801d38633f8edb457fbb80cad5e845cdf4b696060dc7a2d800766607706,PodSandboxId:a3660ccefba4eb9a4dbd7f9896962f7ee5ec5f2a8d9c73aa1024384669615be2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698708353777530607,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 6f2bbacd-e138-4f82-961e-76f1daf88ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 324ceadb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6c5f08fe079ea61ba077b83705e2bf6a0addfe0016a538786f54ecd3026fe1,PodSandboxId:ba233023e51e1cd728bf397c82b6b14bf2bad544d3e500cddefe8a2c7bb89970,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698708351310342662,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m9f5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: a79ceb52-48df-4240-9edc-05c81bf58f73,},Annotations:map[string]string{io.kubernetes.container.hash: 393bde1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ba6d52cc922a6ea54326dcd8a3f1cdd128599b4ff53ce57c1f71de0977f373,PodSandboxId:54dee3ed6b0b1247d9652ae4adfa4ef4e7fd7a25b63fcc8f4f0d47f64423c9f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698708349143596487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xbsl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb41a78a-bf80-4546-b7d6-423a8c
3ad0e1,},Annotations:map[string]string{io.kubernetes.container.hash: b2372445,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1477e09f26d4f3d07b322c6a75719e2af574ae3f951ab1ebea4f35dc577ff93e,PodSandboxId:fe81c12faee48d063cb49d075ce8a9cb397711561d6ce93c65eafcfe036dcbe1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698708328650704597,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd3eb04179d9bdc0a8332c92e6e42d18,},Ann
otations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd16f1ddb981cd2509d3db88fcdf1ec90dad60ebc418231dadf485f2d86e2498,PodSandboxId:5fe8fc7d6e383471850930ab82d6bf4bd4e8e948ac3e1c18cb6e7afb4e7dee51,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698708328351984455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840387190d79e7771c73d8f6fcb777d3,},Annotations:map[string]string{io.kubernetes.container.h
ash: a0e21061,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b901e332ebdd5cdcfea333c45301b3f30b049bfdc2be8098c0bbcbf5bc19d008,PodSandboxId:4c4760370495b53067096447e1b66f917ee81decc8ea048153592fc9e5181e93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698708328239414622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55259bd1b9f1e240aa9139582b4696e7,},Annotations:map[string]string{i
o.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724b0a6b7a4a53994e7cc49beec2a61445fcd4c11b7aaf7be3c3aacedbe2a47b,PodSandboxId:d788127d94bb12b337fae941c7720c0fd9f7c95ca881c67a9dc8fef37b02d55f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698708327985379527,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377aac2edfa5973c73516a60b3dd1cd5,},Annotations:map[string]string{io.kubernetes
.container.hash: 4e859895,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0e992aa4-20d6-410b-9d9b-9ab15a5b80d7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.343322968Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f449baa3-772f-4e0f-bd8c-9c17c8029623 name=/runtime.v1.RuntimeService/Version
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.343382315Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f449baa3-772f-4e0f-bd8c-9c17c8029623 name=/runtime.v1.RuntimeService/Version
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.344734411Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5714e11f-c57c-43be-aca5-3c89257102cc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.345122960Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698708406345111049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5714e11f-c57c-43be-aca5-3c89257102cc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.345742923Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4883ab1e-4919-46c2-8e12-49cd2a4bf89b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.345826477Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4883ab1e-4919-46c2-8e12-49cd2a4bf89b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.346027583Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a93c2a9848351fda56c95a723ef469ef1ac6a1ff899df0fc99ab104180c85eb0,PodSandboxId:0d2388050c37c9afcd769c46a12792b500bf29af26d7372f97bfd3b1691e5fe2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698708402725948076,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-7hhs5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c28c851-1dbe-434e-a041-4bf33b87bd7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7649ab49,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97addc78703a23c1a54257961f9eec4d2ef10141a3a8130bc72f57c8a5f09044,PodSandboxId:be362a11925df7a5a3c6faca3d6e3a2332b5970761475853046d25e17655d402,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698708354001770774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6pgvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d854be1d-ae4e-420a-9853-253f0258915c,},Annotations:map[string]string{io.kubernetes.container.hash: 11ac28ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:733d6801d38633f8edb457fbb80cad5e845cdf4b696060dc7a2d800766607706,PodSandboxId:a3660ccefba4eb9a4dbd7f9896962f7ee5ec5f2a8d9c73aa1024384669615be2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698708353777530607,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 6f2bbacd-e138-4f82-961e-76f1daf88ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 324ceadb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6c5f08fe079ea61ba077b83705e2bf6a0addfe0016a538786f54ecd3026fe1,PodSandboxId:ba233023e51e1cd728bf397c82b6b14bf2bad544d3e500cddefe8a2c7bb89970,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698708351310342662,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m9f5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: a79ceb52-48df-4240-9edc-05c81bf58f73,},Annotations:map[string]string{io.kubernetes.container.hash: 393bde1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ba6d52cc922a6ea54326dcd8a3f1cdd128599b4ff53ce57c1f71de0977f373,PodSandboxId:54dee3ed6b0b1247d9652ae4adfa4ef4e7fd7a25b63fcc8f4f0d47f64423c9f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698708349143596487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xbsl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb41a78a-bf80-4546-b7d6-423a8c
3ad0e1,},Annotations:map[string]string{io.kubernetes.container.hash: b2372445,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1477e09f26d4f3d07b322c6a75719e2af574ae3f951ab1ebea4f35dc577ff93e,PodSandboxId:fe81c12faee48d063cb49d075ce8a9cb397711561d6ce93c65eafcfe036dcbe1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698708328650704597,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd3eb04179d9bdc0a8332c92e6e42d18,},Ann
otations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd16f1ddb981cd2509d3db88fcdf1ec90dad60ebc418231dadf485f2d86e2498,PodSandboxId:5fe8fc7d6e383471850930ab82d6bf4bd4e8e948ac3e1c18cb6e7afb4e7dee51,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698708328351984455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840387190d79e7771c73d8f6fcb777d3,},Annotations:map[string]string{io.kubernetes.container.h
ash: a0e21061,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b901e332ebdd5cdcfea333c45301b3f30b049bfdc2be8098c0bbcbf5bc19d008,PodSandboxId:4c4760370495b53067096447e1b66f917ee81decc8ea048153592fc9e5181e93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698708328239414622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55259bd1b9f1e240aa9139582b4696e7,},Annotations:map[string]string{i
o.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724b0a6b7a4a53994e7cc49beec2a61445fcd4c11b7aaf7be3c3aacedbe2a47b,PodSandboxId:d788127d94bb12b337fae941c7720c0fd9f7c95ca881c67a9dc8fef37b02d55f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698708327985379527,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377aac2edfa5973c73516a60b3dd1cd5,},Annotations:map[string]string{io.kubernetes
.container.hash: 4e859895,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4883ab1e-4919-46c2-8e12-49cd2a4bf89b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.384893320Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d7580c25-d76e-4931-893c-d78e2cd1b613 name=/runtime.v1.RuntimeService/Version
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.384973376Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d7580c25-d76e-4931-893c-d78e2cd1b613 name=/runtime.v1.RuntimeService/Version
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.386308215Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=703d7978-29ad-4545-a221-41e29574c431 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.386745947Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698708406386732959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=703d7978-29ad-4545-a221-41e29574c431 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.387403182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aa7bffcb-be80-4400-b8ca-19d00747dbaa name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.387477345Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aa7bffcb-be80-4400-b8ca-19d00747dbaa name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:26:46 multinode-370491 crio[717]: time="2023-10-30 23:26:46.387696028Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a93c2a9848351fda56c95a723ef469ef1ac6a1ff899df0fc99ab104180c85eb0,PodSandboxId:0d2388050c37c9afcd769c46a12792b500bf29af26d7372f97bfd3b1691e5fe2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698708402725948076,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-7hhs5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c28c851-1dbe-434e-a041-4bf33b87bd7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7649ab49,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97addc78703a23c1a54257961f9eec4d2ef10141a3a8130bc72f57c8a5f09044,PodSandboxId:be362a11925df7a5a3c6faca3d6e3a2332b5970761475853046d25e17655d402,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698708354001770774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6pgvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d854be1d-ae4e-420a-9853-253f0258915c,},Annotations:map[string]string{io.kubernetes.container.hash: 11ac28ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:733d6801d38633f8edb457fbb80cad5e845cdf4b696060dc7a2d800766607706,PodSandboxId:a3660ccefba4eb9a4dbd7f9896962f7ee5ec5f2a8d9c73aa1024384669615be2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698708353777530607,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 6f2bbacd-e138-4f82-961e-76f1daf88ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 324ceadb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6c5f08fe079ea61ba077b83705e2bf6a0addfe0016a538786f54ecd3026fe1,PodSandboxId:ba233023e51e1cd728bf397c82b6b14bf2bad544d3e500cddefe8a2c7bb89970,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698708351310342662,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m9f5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: a79ceb52-48df-4240-9edc-05c81bf58f73,},Annotations:map[string]string{io.kubernetes.container.hash: 393bde1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ba6d52cc922a6ea54326dcd8a3f1cdd128599b4ff53ce57c1f71de0977f373,PodSandboxId:54dee3ed6b0b1247d9652ae4adfa4ef4e7fd7a25b63fcc8f4f0d47f64423c9f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698708349143596487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xbsl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb41a78a-bf80-4546-b7d6-423a8c
3ad0e1,},Annotations:map[string]string{io.kubernetes.container.hash: b2372445,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1477e09f26d4f3d07b322c6a75719e2af574ae3f951ab1ebea4f35dc577ff93e,PodSandboxId:fe81c12faee48d063cb49d075ce8a9cb397711561d6ce93c65eafcfe036dcbe1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698708328650704597,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd3eb04179d9bdc0a8332c92e6e42d18,},Ann
otations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd16f1ddb981cd2509d3db88fcdf1ec90dad60ebc418231dadf485f2d86e2498,PodSandboxId:5fe8fc7d6e383471850930ab82d6bf4bd4e8e948ac3e1c18cb6e7afb4e7dee51,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698708328351984455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840387190d79e7771c73d8f6fcb777d3,},Annotations:map[string]string{io.kubernetes.container.h
ash: a0e21061,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b901e332ebdd5cdcfea333c45301b3f30b049bfdc2be8098c0bbcbf5bc19d008,PodSandboxId:4c4760370495b53067096447e1b66f917ee81decc8ea048153592fc9e5181e93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698708328239414622,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55259bd1b9f1e240aa9139582b4696e7,},Annotations:map[string]string{i
o.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724b0a6b7a4a53994e7cc49beec2a61445fcd4c11b7aaf7be3c3aacedbe2a47b,PodSandboxId:d788127d94bb12b337fae941c7720c0fd9f7c95ca881c67a9dc8fef37b02d55f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698708327985379527,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377aac2edfa5973c73516a60b3dd1cd5,},Annotations:map[string]string{io.kubernetes
.container.hash: 4e859895,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aa7bffcb-be80-4400-b8ca-19d00747dbaa name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a93c2a9848351       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 seconds ago        Running             busybox                   0                   0d2388050c37c       busybox-5bc68d56bd-7hhs5
	97addc78703a2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      52 seconds ago       Running             coredns                   0                   be362a11925df       coredns-5dd5756b68-6pgvt
	733d6801d3863       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      52 seconds ago       Running             storage-provisioner       0                   a3660ccefba4e       storage-provisioner
	3e6c5f08fe079       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      55 seconds ago       Running             kindnet-cni               0                   ba233023e51e1       kindnet-m9f5k
	a7ba6d52cc922       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      57 seconds ago       Running             kube-proxy                0                   54dee3ed6b0b1       kube-proxy-xbsl5
	1477e09f26d4f       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      About a minute ago   Running             kube-scheduler            0                   fe81c12faee48       kube-scheduler-multinode-370491
	fd16f1ddb981c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   5fe8fc7d6e383       etcd-multinode-370491
	b901e332ebdd5       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      About a minute ago   Running             kube-controller-manager   0                   4c4760370495b       kube-controller-manager-multinode-370491
	724b0a6b7a4a5       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      About a minute ago   Running             kube-apiserver            0                   d788127d94bb1       kube-apiserver-multinode-370491
	
	* 
	* ==> coredns [97addc78703a23c1a54257961f9eec4d2ef10141a3a8130bc72f57c8a5f09044] <==
	* [INFO] 10.244.1.2:40499 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00016272s
	[INFO] 10.244.0.3:41570 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177692s
	[INFO] 10.244.0.3:57515 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00173368s
	[INFO] 10.244.0.3:52138 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081548s
	[INFO] 10.244.0.3:53873 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000071569s
	[INFO] 10.244.0.3:33686 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001252934s
	[INFO] 10.244.0.3:46872 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000144563s
	[INFO] 10.244.0.3:44114 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092512s
	[INFO] 10.244.0.3:48742 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063832s
	[INFO] 10.244.1.2:42537 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142508s
	[INFO] 10.244.1.2:42382 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00020936s
	[INFO] 10.244.1.2:34176 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000188326s
	[INFO] 10.244.1.2:57608 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076972s
	[INFO] 10.244.0.3:56770 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009872s
	[INFO] 10.244.0.3:60706 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088946s
	[INFO] 10.244.0.3:53740 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064209s
	[INFO] 10.244.0.3:59462 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083203s
	[INFO] 10.244.1.2:49690 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000223201s
	[INFO] 10.244.1.2:49943 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000219311s
	[INFO] 10.244.1.2:48624 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000256295s
	[INFO] 10.244.1.2:56112 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136863s
	[INFO] 10.244.0.3:50405 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175742s
	[INFO] 10.244.0.3:36063 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00010916s
	[INFO] 10.244.0.3:42941 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000068917s
	[INFO] 10.244.0.3:53956 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000047395s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-370491
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-370491
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4
	                    minikube.k8s.io/name=multinode-370491
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_30T23_25_36_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Oct 2023 23:25:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-370491
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Oct 2023 23:26:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Oct 2023 23:25:52 +0000   Mon, 30 Oct 2023 23:25:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Oct 2023 23:25:52 +0000   Mon, 30 Oct 2023 23:25:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Oct 2023 23:25:52 +0000   Mon, 30 Oct 2023 23:25:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Oct 2023 23:25:52 +0000   Mon, 30 Oct 2023 23:25:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.231
	  Hostname:    multinode-370491
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 636fd736a51348bda33817c729308277
	  System UUID:                636fd736-a513-48bd-a338-17c729308277
	  Boot ID:                    ca7ffdc0-d329-4826-a30b-3807f59ae2f7
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-7hhs5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5dd5756b68-6pgvt                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     58s
	  kube-system                 etcd-multinode-370491                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         71s
	  kube-system                 kindnet-m9f5k                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      59s
	  kube-system                 kube-apiserver-multinode-370491             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-multinode-370491    200m (10%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-xbsl5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-multinode-370491             100m (5%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 57s   kube-proxy       
	  Normal  Starting                 71s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  71s   kubelet          Node multinode-370491 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s   kubelet          Node multinode-370491 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s   kubelet          Node multinode-370491 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  71s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           59s   node-controller  Node multinode-370491 event: Registered Node multinode-370491 in Controller
	  Normal  NodeReady                54s   kubelet          Node multinode-370491 status is now: NodeReady
	
	
	Name:               multinode-370491-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-370491-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Oct 2023 23:26:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-370491-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Oct 2023 23:26:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Oct 2023 23:26:38 +0000   Mon, 30 Oct 2023 23:26:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Oct 2023 23:26:38 +0000   Mon, 30 Oct 2023 23:26:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Oct 2023 23:26:38 +0000   Mon, 30 Oct 2023 23:26:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Oct 2023 23:26:38 +0000   Mon, 30 Oct 2023 23:26:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.85
	  Hostname:    multinode-370491-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 09caef3963124bd193408d450ad01051
	  System UUID:                09caef39-6312-4bd1-9340-8d450ad01051
	  Boot ID:                    35b2b885-d6bb-4e06-ab65-0931f0b0b4da
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-4t8fk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-76g2q               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16s
	  kube-system                 kube-proxy-g9wzd            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  NodeHasSufficientMemory  16s (x5 over 18s)  kubelet          Node multinode-370491-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16s (x5 over 18s)  kubelet          Node multinode-370491-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16s (x5 over 18s)  kubelet          Node multinode-370491-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14s                node-controller  Node multinode-370491-m02 event: Registered Node multinode-370491-m02 in Controller
	  Normal  NodeReady                8s                 kubelet          Node multinode-370491-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Oct30 23:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067773] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.336703] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Oct30 23:25] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.151543] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.073550] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.375541] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.102660] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.140523] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.098380] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.206747] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[  +9.282487] systemd-fstab-generator[923]: Ignoring "noauto" for root device
	[  +9.248582] systemd-fstab-generator[1260]: Ignoring "noauto" for root device
	[ +19.561081] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [fd16f1ddb981cd2509d3db88fcdf1ec90dad60ebc418231dadf485f2d86e2498] <==
	* {"level":"info","ts":"2023-10-30T23:25:30.630099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-30T23:25:30.630116Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 received MsgPreVoteResp from 6a82bbfd8eee2a80 at term 1"}
	{"level":"info","ts":"2023-10-30T23:25:30.630128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 became candidate at term 2"}
	{"level":"info","ts":"2023-10-30T23:25:30.630133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 received MsgVoteResp from 6a82bbfd8eee2a80 at term 2"}
	{"level":"info","ts":"2023-10-30T23:25:30.630145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 became leader at term 2"}
	{"level":"info","ts":"2023-10-30T23:25:30.630152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6a82bbfd8eee2a80 elected leader 6a82bbfd8eee2a80 at term 2"}
	{"level":"info","ts":"2023-10-30T23:25:30.631607Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-30T23:25:30.632761Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"6a82bbfd8eee2a80","local-member-attributes":"{Name:multinode-370491 ClientURLs:[https://192.168.39.231:2379]}","request-path":"/0/members/6a82bbfd8eee2a80/attributes","cluster-id":"1a20717615099fdd","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-30T23:25:30.633324Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-30T23:25:30.633611Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1a20717615099fdd","local-member-id":"6a82bbfd8eee2a80","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-30T23:25:30.63373Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-30T23:25:30.633775Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-30T23:25:30.633816Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-30T23:25:30.63487Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-30T23:25:30.642887Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.231:2379"}
	{"level":"info","ts":"2023-10-30T23:25:30.644349Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-30T23:25:30.644393Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2023-10-30T23:26:29.09272Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.709774ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3062601140967740774 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-nfcsr\" mod_revision:448 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-nfcsr\" value_size:1268 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-nfcsr\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-10-30T23:26:29.093115Z","caller":"traceutil/trace.go:171","msg":"trace[1156074119] linearizableReadLoop","detail":"{readStateIndex:467; appliedIndex:466; }","duration":"307.587004ms","start":"2023-10-30T23:26:28.785506Z","end":"2023-10-30T23:26:29.093093Z","steps":["trace[1156074119] 'read index received'  (duration: 55.819839ms)","trace[1156074119] 'applied index is now lower than readState.Index'  (duration: 251.765551ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-30T23:26:29.093388Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"307.887085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/csr-nfcsr\" ","response":"range_response_count:1 size:1337"}
	{"level":"info","ts":"2023-10-30T23:26:29.093577Z","caller":"traceutil/trace.go:171","msg":"trace[2120329211] range","detail":"{range_begin:/registry/certificatesigningrequests/csr-nfcsr; range_end:; response_count:1; response_revision:449; }","duration":"308.076421ms","start":"2023-10-30T23:26:28.785485Z","end":"2023-10-30T23:26:29.093561Z","steps":["trace[2120329211] 'agreement among raft nodes before linearized reading'  (duration: 307.822565ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-30T23:26:29.093706Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-30T23:26:28.785473Z","time spent":"308.215767ms","remote":"127.0.0.1:34350","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":1359,"request content":"key:\"/registry/certificatesigningrequests/csr-nfcsr\" "}
	{"level":"info","ts":"2023-10-30T23:26:29.093883Z","caller":"traceutil/trace.go:171","msg":"trace[456094755] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"309.85766ms","start":"2023-10-30T23:26:28.78401Z","end":"2023-10-30T23:26:29.093868Z","steps":["trace[456094755] 'process raft request'  (duration: 57.358957ms)","trace[456094755] 'compare'  (duration: 249.38388ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-30T23:26:29.093966Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-30T23:26:28.783987Z","time spent":"309.938784ms","remote":"127.0.0.1:34350","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1322,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-nfcsr\" mod_revision:448 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-nfcsr\" value_size:1268 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-nfcsr\" > >"}
	{"level":"info","ts":"2023-10-30T23:26:29.394104Z","caller":"traceutil/trace.go:171","msg":"trace[1345057919] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"292.573111ms","start":"2023-10-30T23:26:29.101516Z","end":"2023-10-30T23:26:29.394089Z","steps":["trace[1345057919] 'process raft request'  (duration: 218.459842ms)","trace[1345057919] 'compare'  (duration: 73.753582ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  23:26:46 up 1 min,  0 users,  load average: 0.43, 0.22, 0.08
	Linux multinode-370491 5.10.57 #1 SMP Mon Oct 30 21:42:24 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [3e6c5f08fe079ea61ba077b83705e2bf6a0addfe0016a538786f54ecd3026fe1] <==
	* I1030 23:25:52.173434       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1030 23:25:52.173517       1 main.go:107] hostIP = 192.168.39.231
	podIP = 192.168.39.231
	I1030 23:25:52.173845       1 main.go:116] setting mtu 1500 for CNI 
	I1030 23:25:52.173886       1 main.go:146] kindnetd IP family: "ipv4"
	I1030 23:25:52.173909       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1030 23:25:52.762459       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I1030 23:25:52.762536       1 main.go:227] handling current node
	I1030 23:26:02.771871       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I1030 23:26:02.771921       1 main.go:227] handling current node
	I1030 23:26:12.784325       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I1030 23:26:12.784376       1 main.go:227] handling current node
	I1030 23:26:22.790829       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I1030 23:26:22.790888       1 main.go:227] handling current node
	I1030 23:26:32.805731       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I1030 23:26:32.805858       1 main.go:227] handling current node
	I1030 23:26:32.805911       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I1030 23:26:32.805938       1 main.go:250] Node multinode-370491-m02 has CIDR [10.244.1.0/24] 
	I1030 23:26:32.806323       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.85 Flags: [] Table: 0} 
	I1030 23:26:42.820188       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I1030 23:26:42.820310       1 main.go:227] handling current node
	I1030 23:26:42.820329       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I1030 23:26:42.820336       1 main.go:250] Node multinode-370491-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [724b0a6b7a4a53994e7cc49beec2a61445fcd4c11b7aaf7be3c3aacedbe2a47b] <==
	* I1030 23:25:32.281668       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1030 23:25:32.281850       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1030 23:25:32.282766       1 aggregator.go:166] initial CRD sync complete...
	I1030 23:25:32.282801       1 autoregister_controller.go:141] Starting autoregister controller
	I1030 23:25:32.282807       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1030 23:25:32.282812       1 cache.go:39] Caches are synced for autoregister controller
	I1030 23:25:32.283126       1 controller.go:624] quota admission added evaluator for: namespaces
	I1030 23:25:32.288575       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1030 23:25:32.313751       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1030 23:25:32.325986       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1030 23:25:33.093696       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1030 23:25:33.098948       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1030 23:25:33.099078       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1030 23:25:33.738182       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1030 23:25:33.783897       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1030 23:25:33.909321       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1030 23:25:33.921910       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.231]
	I1030 23:25:33.923137       1 controller.go:624] quota admission added evaluator for: endpoints
	I1030 23:25:33.927650       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1030 23:25:34.181853       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1030 23:25:35.368440       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1030 23:25:35.388111       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1030 23:25:35.417508       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1030 23:25:47.791012       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1030 23:25:47.936951       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [b901e332ebdd5cdcfea333c45301b3f30b049bfdc2be8098c0bbcbf5bc19d008] <==
	* I1030 23:25:54.754408       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.81µs"
	I1030 23:25:54.786045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.254282ms"
	I1030 23:25:54.786332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.073µs"
	I1030 23:25:57.424579       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1030 23:25:57.424897       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-6pgvt" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-6pgvt"
	I1030 23:25:57.424945       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I1030 23:26:30.264465       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-370491-m02\" does not exist"
	I1030 23:26:30.285403       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-370491-m02" podCIDRs=["10.244.1.0/24"]
	I1030 23:26:30.287096       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-76g2q"
	I1030 23:26:30.287956       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-g9wzd"
	I1030 23:26:32.430836       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-370491-m02"
	I1030 23:26:32.431317       1 event.go:307] "Event occurred" object="multinode-370491-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-370491-m02 event: Registered Node multinode-370491-m02 in Controller"
	I1030 23:26:38.534662       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-370491-m02"
	I1030 23:26:40.855291       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1030 23:26:40.879840       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-4t8fk"
	I1030 23:26:40.897004       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-7hhs5"
	I1030 23:26:40.925847       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="71.458735ms"
	I1030 23:26:40.933618       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="7.66593ms"
	I1030 23:26:40.933960       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="64.463µs"
	I1030 23:26:40.947160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="171.503µs"
	I1030 23:26:42.442087       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-4t8fk" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-4t8fk"
	I1030 23:26:42.897935       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.084917ms"
	I1030 23:26:42.898337       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="155.152µs"
	I1030 23:26:42.929112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="3.982331ms"
	I1030 23:26:42.929837       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="47.929µs"
	
	* 
	* ==> kube-proxy [a7ba6d52cc922a6ea54326dcd8a3f1cdd128599b4ff53ce57c1f71de0977f373] <==
	* I1030 23:25:49.291939       1 server_others.go:69] "Using iptables proxy"
	I1030 23:25:49.302377       1 node.go:141] Successfully retrieved node IP: 192.168.39.231
	I1030 23:25:49.354381       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1030 23:25:49.354449       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1030 23:25:49.357187       1 server_others.go:152] "Using iptables Proxier"
	I1030 23:25:49.357355       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1030 23:25:49.357594       1 server.go:846] "Version info" version="v1.28.3"
	I1030 23:25:49.357638       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 23:25:49.358396       1 config.go:188] "Starting service config controller"
	I1030 23:25:49.358461       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1030 23:25:49.358492       1 config.go:97] "Starting endpoint slice config controller"
	I1030 23:25:49.358507       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1030 23:25:49.360018       1 config.go:315] "Starting node config controller"
	I1030 23:25:49.360058       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1030 23:25:49.458879       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1030 23:25:49.458957       1 shared_informer.go:318] Caches are synced for service config
	I1030 23:25:49.460314       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [1477e09f26d4f3d07b322c6a75719e2af574ae3f951ab1ebea4f35dc577ff93e] <==
	* W1030 23:25:32.254058       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1030 23:25:32.254069       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1030 23:25:32.256328       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1030 23:25:32.256418       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1030 23:25:32.259691       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1030 23:25:32.259809       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1030 23:25:33.070370       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1030 23:25:33.070477       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1030 23:25:33.166982       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1030 23:25:33.167071       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1030 23:25:33.202607       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1030 23:25:33.202657       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1030 23:25:33.222985       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1030 23:25:33.223164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1030 23:25:33.253515       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1030 23:25:33.253586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1030 23:25:33.277997       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1030 23:25:33.278082       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1030 23:25:33.282585       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1030 23:25:33.282734       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1030 23:25:33.439482       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1030 23:25:33.439569       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1030 23:25:33.459118       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1030 23:25:33.459169       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1030 23:25:35.434669       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-30 23:25:03 UTC, ends at Mon 2023-10-30 23:26:47 UTC. --
	Oct 30 23:25:48 multinode-370491 kubelet[1267]: I1030 23:25:48.094871    1267 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eb41a78a-bf80-4546-b7d6-423a8c3ad0e1-kube-proxy\") pod \"kube-proxy-xbsl5\" (UID: \"eb41a78a-bf80-4546-b7d6-423a8c3ad0e1\") " pod="kube-system/kube-proxy-xbsl5"
	Oct 30 23:25:48 multinode-370491 kubelet[1267]: I1030 23:25:48.094945    1267 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbt92\" (UniqueName: \"kubernetes.io/projected/eb41a78a-bf80-4546-b7d6-423a8c3ad0e1-kube-api-access-cbt92\") pod \"kube-proxy-xbsl5\" (UID: \"eb41a78a-bf80-4546-b7d6-423a8c3ad0e1\") " pod="kube-system/kube-proxy-xbsl5"
	Oct 30 23:25:48 multinode-370491 kubelet[1267]: I1030 23:25:48.094970    1267 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a79ceb52-48df-4240-9edc-05c81bf58f73-lib-modules\") pod \"kindnet-m9f5k\" (UID: \"a79ceb52-48df-4240-9edc-05c81bf58f73\") " pod="kube-system/kindnet-m9f5k"
	Oct 30 23:25:48 multinode-370491 kubelet[1267]: I1030 23:25:48.094997    1267 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x54cn\" (UniqueName: \"kubernetes.io/projected/a79ceb52-48df-4240-9edc-05c81bf58f73-kube-api-access-x54cn\") pod \"kindnet-m9f5k\" (UID: \"a79ceb52-48df-4240-9edc-05c81bf58f73\") " pod="kube-system/kindnet-m9f5k"
	Oct 30 23:25:48 multinode-370491 kubelet[1267]: I1030 23:25:48.095016    1267 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb41a78a-bf80-4546-b7d6-423a8c3ad0e1-lib-modules\") pod \"kube-proxy-xbsl5\" (UID: \"eb41a78a-bf80-4546-b7d6-423a8c3ad0e1\") " pod="kube-system/kube-proxy-xbsl5"
	Oct 30 23:25:48 multinode-370491 kubelet[1267]: I1030 23:25:48.095036    1267 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a79ceb52-48df-4240-9edc-05c81bf58f73-cni-cfg\") pod \"kindnet-m9f5k\" (UID: \"a79ceb52-48df-4240-9edc-05c81bf58f73\") " pod="kube-system/kindnet-m9f5k"
	Oct 30 23:25:48 multinode-370491 kubelet[1267]: I1030 23:25:48.095057    1267 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a79ceb52-48df-4240-9edc-05c81bf58f73-xtables-lock\") pod \"kindnet-m9f5k\" (UID: \"a79ceb52-48df-4240-9edc-05c81bf58f73\") " pod="kube-system/kindnet-m9f5k"
	Oct 30 23:25:48 multinode-370491 kubelet[1267]: I1030 23:25:48.095074    1267 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb41a78a-bf80-4546-b7d6-423a8c3ad0e1-xtables-lock\") pod \"kube-proxy-xbsl5\" (UID: \"eb41a78a-bf80-4546-b7d6-423a8c3ad0e1\") " pod="kube-system/kube-proxy-xbsl5"
	Oct 30 23:25:49 multinode-370491 kubelet[1267]: I1030 23:25:49.936847    1267 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xbsl5" podStartSLOduration=2.936810779 podCreationTimestamp="2023-10-30 23:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-30 23:25:49.72141723 +0000 UTC m=+14.382547686" watchObservedRunningTime="2023-10-30 23:25:49.936810779 +0000 UTC m=+14.597941232"
	Oct 30 23:25:52 multinode-370491 kubelet[1267]: I1030 23:25:52.918156    1267 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 30 23:25:52 multinode-370491 kubelet[1267]: I1030 23:25:52.957707    1267 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-m9f5k" podStartSLOduration=5.957668149 podCreationTimestamp="2023-10-30 23:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-30 23:25:52.734743135 +0000 UTC m=+17.395873590" watchObservedRunningTime="2023-10-30 23:25:52.957668149 +0000 UTC m=+17.618798586"
	Oct 30 23:25:52 multinode-370491 kubelet[1267]: I1030 23:25:52.958079    1267 topology_manager.go:215] "Topology Admit Handler" podUID="d854be1d-ae4e-420a-9853-253f0258915c" podNamespace="kube-system" podName="coredns-5dd5756b68-6pgvt"
	Oct 30 23:25:52 multinode-370491 kubelet[1267]: I1030 23:25:52.967196    1267 topology_manager.go:215] "Topology Admit Handler" podUID="6f2bbacd-e138-4f82-961e-76f1daf88ccd" podNamespace="kube-system" podName="storage-provisioner"
	Oct 30 23:25:53 multinode-370491 kubelet[1267]: I1030 23:25:53.032750    1267 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6f2bbacd-e138-4f82-961e-76f1daf88ccd-tmp\") pod \"storage-provisioner\" (UID: \"6f2bbacd-e138-4f82-961e-76f1daf88ccd\") " pod="kube-system/storage-provisioner"
	Oct 30 23:25:53 multinode-370491 kubelet[1267]: I1030 23:25:53.032795    1267 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd89x\" (UniqueName: \"kubernetes.io/projected/6f2bbacd-e138-4f82-961e-76f1daf88ccd-kube-api-access-nd89x\") pod \"storage-provisioner\" (UID: \"6f2bbacd-e138-4f82-961e-76f1daf88ccd\") " pod="kube-system/storage-provisioner"
	Oct 30 23:25:53 multinode-370491 kubelet[1267]: I1030 23:25:53.032821    1267 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d854be1d-ae4e-420a-9853-253f0258915c-config-volume\") pod \"coredns-5dd5756b68-6pgvt\" (UID: \"d854be1d-ae4e-420a-9853-253f0258915c\") " pod="kube-system/coredns-5dd5756b68-6pgvt"
	Oct 30 23:25:53 multinode-370491 kubelet[1267]: I1030 23:25:53.032843    1267 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j5jp\" (UniqueName: \"kubernetes.io/projected/d854be1d-ae4e-420a-9853-253f0258915c-kube-api-access-8j5jp\") pod \"coredns-5dd5756b68-6pgvt\" (UID: \"d854be1d-ae4e-420a-9853-253f0258915c\") " pod="kube-system/coredns-5dd5756b68-6pgvt"
	Oct 30 23:25:54 multinode-370491 kubelet[1267]: I1030 23:25:54.752810    1267 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-6pgvt" podStartSLOduration=6.752768286 podCreationTimestamp="2023-10-30 23:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-30 23:25:54.751890607 +0000 UTC m=+19.413021060" watchObservedRunningTime="2023-10-30 23:25:54.752768286 +0000 UTC m=+19.413898741"
	Oct 30 23:25:55 multinode-370491 kubelet[1267]: I1030 23:25:55.617776    1267 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=7.617740477 podCreationTimestamp="2023-10-30 23:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-30 23:25:54.795136047 +0000 UTC m=+19.456266502" watchObservedRunningTime="2023-10-30 23:25:55.617740477 +0000 UTC m=+20.278870932"
	Oct 30 23:26:35 multinode-370491 kubelet[1267]: E1030 23:26:35.667881    1267 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 30 23:26:35 multinode-370491 kubelet[1267]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 30 23:26:35 multinode-370491 kubelet[1267]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 30 23:26:35 multinode-370491 kubelet[1267]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 30 23:26:40 multinode-370491 kubelet[1267]: I1030 23:26:40.914889    1267 topology_manager.go:215] "Topology Admit Handler" podUID="2c28c851-1dbe-434e-a041-4bf33b87bd7b" podNamespace="default" podName="busybox-5bc68d56bd-7hhs5"
	Oct 30 23:26:41 multinode-370491 kubelet[1267]: I1030 23:26:41.020394    1267 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwz4t\" (UniqueName: \"kubernetes.io/projected/2c28c851-1dbe-434e-a041-4bf33b87bd7b-kube-api-access-qwz4t\") pod \"busybox-5bc68d56bd-7hhs5\" (UID: \"2c28c851-1dbe-434e-a041-4bf33b87bd7b\") " pod="default/busybox-5bc68d56bd-7hhs5"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-370491 -n multinode-370491
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-370491 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.28s)
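As a quick manual probe after a failure like this, one rough sketch (using the busybox pod name from the kubelet log above and the control-plane node IP 192.168.39.231 that appears later in these logs; this is only an approximation, not the command the test itself runs) is:

	kubectl --context multinode-370491 exec busybox-5bc68d56bd-7hhs5 -- ping -c 1 192.168.39.231

This only verifies pod-to-node reachability, which is a first step toward the pod-to-host path that PingHostFrom2Pods actually exercises.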

x
+
TestMultiNode/serial/RestartKeepsNodes (689.16s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-370491
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-370491
E1030 23:29:14.584117  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1030 23:29:30.632751  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-370491: exit status 82 (2m1.765292923s)

-- stdout --
	* Stopping node "multinode-370491"  ...
	* Stopping node "multinode-370491"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:292: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-370491" : exit status 82
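Exit status 82 corresponds to the GUEST_STOP_TIMEOUT shown in the stderr block above: the kvm2 driver gave up waiting for the VM to power off. When that happens, the underlying libvirt domain can be inspected, and forced down if necessary, out-of-band. A sketch, assuming libvirt's system connection and the domain name the kvm2 driver reports for this profile:

	virsh -c qemu:///system list --all                  # every defined domain and its current state
	virsh -c qemu:///system domstate multinode-370491   # "shut off" is expected after a clean stop
	virsh -c qemu:///system destroy multinode-370491    # last resort: hard power-off of a stuck guest

The follow-up "start -p multinode-370491 --wait=true" below then has to recover the half-stopped machine, which helps explain the 9m24s restart time: the log later shows almost five minutes of failed SSH dials before the driver gives up and restarts the VM.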
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-370491 --wait=true -v=8 --alsologtostderr
E1030 23:30:53.677897  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
E1030 23:32:08.184809  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1030 23:34:14.583962  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1030 23:34:30.632902  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
E1030 23:35:37.630189  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1030 23:37:08.184963  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1030 23:38:31.231378  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1030 23:39:14.583467  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1030 23:39:30.630927  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-370491 --wait=true -v=8 --alsologtostderr: (9m24.440410769s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-370491
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-370491 -n multinode-370491
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-370491 logs -n 25: (1.597325612s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-370491 ssh -n                                                                 | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:27 UTC |
	|         | multinode-370491-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-370491 cp multinode-370491-m02:/home/docker/cp-test.txt                       | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile715226696/001/cp-test_multinode-370491-m02.txt          |                  |         |                |                     |                     |
	| ssh     | multinode-370491 ssh -n                                                                 | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:27 UTC |
	|         | multinode-370491-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-370491 cp multinode-370491-m02:/home/docker/cp-test.txt                       | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:27 UTC |
	|         | multinode-370491:/home/docker/cp-test_multinode-370491-m02_multinode-370491.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-370491 ssh -n                                                                 | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:27 UTC |
	|         | multinode-370491-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-370491 ssh -n multinode-370491 sudo cat                                       | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:27 UTC |
	|         | /home/docker/cp-test_multinode-370491-m02_multinode-370491.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-370491 cp multinode-370491-m02:/home/docker/cp-test.txt                       | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:27 UTC |
	|         | multinode-370491-m03:/home/docker/cp-test_multinode-370491-m02_multinode-370491-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-370491 ssh -n                                                                 | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:27 UTC |
	|         | multinode-370491-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-370491 ssh -n multinode-370491-m03 sudo cat                                   | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:27 UTC |
	|         | /home/docker/cp-test_multinode-370491-m02_multinode-370491-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-370491 cp testdata/cp-test.txt                                                | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:27 UTC |
	|         | multinode-370491-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-370491 ssh -n                                                                 | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:27 UTC |
	|         | multinode-370491-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-370491 cp multinode-370491-m03:/home/docker/cp-test.txt                       | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile715226696/001/cp-test_multinode-370491-m03.txt          |                  |         |                |                     |                     |
	| ssh     | multinode-370491 ssh -n                                                                 | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:27 UTC |
	|         | multinode-370491-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-370491 cp multinode-370491-m03:/home/docker/cp-test.txt                       | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:27 UTC |
	|         | multinode-370491:/home/docker/cp-test_multinode-370491-m03_multinode-370491.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-370491 ssh -n                                                                 | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:27 UTC |
	|         | multinode-370491-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-370491 ssh -n multinode-370491 sudo cat                                       | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:27 UTC |
	|         | /home/docker/cp-test_multinode-370491-m03_multinode-370491.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-370491 cp multinode-370491-m03:/home/docker/cp-test.txt                       | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:27 UTC |
	|         | multinode-370491-m02:/home/docker/cp-test_multinode-370491-m03_multinode-370491-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-370491 ssh -n                                                                 | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:27 UTC |
	|         | multinode-370491-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-370491 ssh -n multinode-370491-m02 sudo cat                                   | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:27 UTC |
	|         | /home/docker/cp-test_multinode-370491-m03_multinode-370491-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-370491 node stop m03                                                          | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:27 UTC |
	| node    | multinode-370491 node start                                                             | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:27 UTC | 30 Oct 23 23:28 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |                |                     |                     |
	| node    | list -p multinode-370491                                                                | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:28 UTC |                     |
	| stop    | -p multinode-370491                                                                     | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:28 UTC |                     |
	| start   | -p multinode-370491                                                                     | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:30 UTC | 30 Oct 23 23:39 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-370491                                                                | multinode-370491 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:39 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/30 23:30:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 23:30:10.269776  232335 out.go:296] Setting OutFile to fd 1 ...
	I1030 23:30:10.269962  232335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:30:10.269979  232335 out.go:309] Setting ErrFile to fd 2...
	I1030 23:30:10.269988  232335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:30:10.270212  232335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1030 23:30:10.270792  232335 out.go:303] Setting JSON to false
	I1030 23:30:10.271769  232335 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25962,"bootTime":1698682648,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 23:30:10.271837  232335 start.go:138] virtualization: kvm guest
	I1030 23:30:10.274502  232335 out.go:177] * [multinode-370491] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 23:30:10.276239  232335 out.go:177]   - MINIKUBE_LOCATION=17527
	I1030 23:30:10.276188  232335 notify.go:220] Checking for updates...
	I1030 23:30:10.277707  232335 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 23:30:10.279207  232335 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:30:10.280654  232335 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1030 23:30:10.282034  232335 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 23:30:10.283333  232335 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 23:30:10.285063  232335 config.go:182] Loaded profile config "multinode-370491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:30:10.285167  232335 driver.go:378] Setting default libvirt URI to qemu:///system
	I1030 23:30:10.285584  232335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:30:10.285645  232335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:30:10.300181  232335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36333
	I1030 23:30:10.300621  232335 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:30:10.301284  232335 main.go:141] libmachine: Using API Version  1
	I1030 23:30:10.301310  232335 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:30:10.301632  232335 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:30:10.301810  232335 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:30:10.337643  232335 out.go:177] * Using the kvm2 driver based on existing profile
	I1030 23:30:10.338960  232335 start.go:298] selected driver: kvm2
	I1030 23:30:10.338977  232335 start.go:902] validating driver "kvm2" against &{Name:multinode-370491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.3 ClusterName:multinode-370491 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.85 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.108 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:fals
e ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1030 23:30:10.339094  232335 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 23:30:10.339396  232335 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:30:10.339499  232335 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17527-208817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 23:30:10.354091  232335 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1030 23:30:10.354764  232335 start_flags.go:934] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1030 23:30:10.354824  232335 cni.go:84] Creating CNI manager for ""
	I1030 23:30:10.354837  232335 cni.go:136] 3 nodes found, recommending kindnet
	I1030 23:30:10.354848  232335 start_flags.go:323] config:
	{Name:multinode-370491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-370491 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.85 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.108 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-pro
visioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socke
tVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1030 23:30:10.355075  232335 iso.go:125] acquiring lock: {Name:mk17c26869b21ec4c3726ac5b4b2fb393d92c043 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:30:10.356983  232335 out.go:177] * Starting control plane node multinode-370491 in cluster multinode-370491
	I1030 23:30:10.358234  232335 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1030 23:30:10.358282  232335 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1030 23:30:10.358294  232335 cache.go:56] Caching tarball of preloaded images
	I1030 23:30:10.358404  232335 preload.go:174] Found /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 23:30:10.358421  232335 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1030 23:30:10.358560  232335 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/config.json ...
	I1030 23:30:10.358789  232335 start.go:365] acquiring machines lock for multinode-370491: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 23:30:10.358844  232335 start.go:369] acquired machines lock for "multinode-370491" in 34.135µs
	I1030 23:30:10.358868  232335 start.go:96] Skipping create...Using existing machine configuration
	I1030 23:30:10.358877  232335 fix.go:54] fixHost starting: 
	I1030 23:30:10.359156  232335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:30:10.359199  232335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:30:10.373176  232335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36727
	I1030 23:30:10.373652  232335 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:30:10.374112  232335 main.go:141] libmachine: Using API Version  1
	I1030 23:30:10.374136  232335 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:30:10.374486  232335 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:30:10.374640  232335 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:30:10.374761  232335 main.go:141] libmachine: (multinode-370491) Calling .GetState
	I1030 23:30:10.376310  232335 fix.go:102] recreateIfNeeded on multinode-370491: state=Running err=<nil>
	W1030 23:30:10.376330  232335 fix.go:128] unexpected machine state, will restart: <nil>
	I1030 23:30:10.378575  232335 out.go:177] * Updating the running kvm2 "multinode-370491" VM ...
	I1030 23:30:10.379798  232335 machine.go:88] provisioning docker machine ...
	I1030 23:30:10.379817  232335 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:30:10.380044  232335 main.go:141] libmachine: (multinode-370491) Calling .GetMachineName
	I1030 23:30:10.380228  232335 buildroot.go:166] provisioning hostname "multinode-370491"
	I1030 23:30:10.380247  232335 main.go:141] libmachine: (multinode-370491) Calling .GetMachineName
	I1030 23:30:10.380366  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:30:10.382594  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:30:10.383061  232335 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:30:10.383096  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:30:10.383263  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:30:10.383441  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:30:10.383583  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:30:10.383703  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:30:10.383853  232335 main.go:141] libmachine: Using SSH client type: native
	I1030 23:30:10.384366  232335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1030 23:30:10.384387  232335 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-370491 && echo "multinode-370491" | sudo tee /etc/hostname
	I1030 23:30:28.937241  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:30:35.017379  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:30:38.089245  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:30:44.169276  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:30:47.241197  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:30:53.321239  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:30:56.393274  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:31:02.473259  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:31:05.545240  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:31:11.625263  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:31:14.697224  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:31:20.777219  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:31:23.849201  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:31:29.929236  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:31:33.001202  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:31:39.081243  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:31:42.153307  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:31:48.233228  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:31:51.305413  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:31:57.385237  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:32:00.457217  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:32:06.537447  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:32:09.609256  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:32:15.689262  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:32:18.761229  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:32:24.841324  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:32:27.913260  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:32:33.993263  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:32:37.065212  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:32:43.145409  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:32:46.217201  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:32:52.297199  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:32:55.369232  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:33:01.449275  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:33:04.521200  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:33:10.601295  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:33:13.673175  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:33:19.753229  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:33:22.825225  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:33:28.905219  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:33:31.977248  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:33:38.057279  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:33:41.129243  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:33:47.209269  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:33:50.281221  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:33:56.361433  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:33:59.433250  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:34:05.513201  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:34:08.585235  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:34:14.665297  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:34:17.737231  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:34:23.817220  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:34:26.889192  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:34:32.969303  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:34:36.041197  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:34:42.121256  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:34:45.193251  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:34:51.273252  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:34:54.349160  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:35:00.425200  232335 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.231:22: connect: no route to host
	I1030 23:35:03.427943  232335 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 23:35:03.428013  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:35:03.430136  232335 machine.go:91] provisioned docker machine in 4m53.050318581s
	I1030 23:35:03.430188  232335 fix.go:56] fixHost completed within 4m53.071311532s
	I1030 23:35:03.430194  232335 start.go:83] releasing machines lock for "multinode-370491", held for 4m53.071336793s
	W1030 23:35:03.430246  232335 start.go:691] error starting host: provision: host is not running
	W1030 23:35:03.430571  232335 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1030 23:35:03.430581  232335 start.go:706] Will try again in 5 seconds ...
	I1030 23:35:08.433359  232335 start.go:365] acquiring machines lock for multinode-370491: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 23:35:08.433558  232335 start.go:369] acquired machines lock for "multinode-370491" in 120.681µs
	I1030 23:35:08.433640  232335 start.go:96] Skipping create...Using existing machine configuration
	I1030 23:35:08.433653  232335 fix.go:54] fixHost starting: 
	I1030 23:35:08.434353  232335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:35:08.434397  232335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:35:08.449524  232335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45517
	I1030 23:35:08.450063  232335 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:35:08.450575  232335 main.go:141] libmachine: Using API Version  1
	I1030 23:35:08.450602  232335 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:35:08.451047  232335 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:35:08.451200  232335 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:35:08.451383  232335 main.go:141] libmachine: (multinode-370491) Calling .GetState
	I1030 23:35:08.452980  232335 fix.go:102] recreateIfNeeded on multinode-370491: state=Stopped err=<nil>
	I1030 23:35:08.453000  232335 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	W1030 23:35:08.453168  232335 fix.go:128] unexpected machine state, will restart: <nil>
	I1030 23:35:08.455240  232335 out.go:177] * Restarting existing kvm2 VM for "multinode-370491" ...
	I1030 23:35:08.456689  232335 main.go:141] libmachine: (multinode-370491) Calling .Start
	I1030 23:35:08.456880  232335 main.go:141] libmachine: (multinode-370491) Ensuring networks are active...
	I1030 23:35:08.457754  232335 main.go:141] libmachine: (multinode-370491) Ensuring network default is active
	I1030 23:35:08.458112  232335 main.go:141] libmachine: (multinode-370491) Ensuring network mk-multinode-370491 is active
	I1030 23:35:08.458547  232335 main.go:141] libmachine: (multinode-370491) Getting domain xml...
	I1030 23:35:08.459292  232335 main.go:141] libmachine: (multinode-370491) Creating domain...
	I1030 23:35:09.698940  232335 main.go:141] libmachine: (multinode-370491) Waiting to get IP...
	I1030 23:35:09.699791  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:09.700256  232335 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:35:09.700359  232335 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:35:09.700245  233147 retry.go:31] will retry after 191.722736ms: waiting for machine to come up
	I1030 23:35:09.893858  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:09.894692  232335 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:35:09.894728  232335 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:35:09.894630  233147 retry.go:31] will retry after 370.30554ms: waiting for machine to come up
	I1030 23:35:10.266021  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:10.266426  232335 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:35:10.266461  232335 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:35:10.266362  233147 retry.go:31] will retry after 308.139719ms: waiting for machine to come up
	I1030 23:35:10.575750  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:10.576288  232335 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:35:10.576319  232335 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:35:10.576221  233147 retry.go:31] will retry after 560.28519ms: waiting for machine to come up
	I1030 23:35:11.137976  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:11.138465  232335 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:35:11.138499  232335 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:35:11.138430  233147 retry.go:31] will retry after 720.265522ms: waiting for machine to come up
	I1030 23:35:11.860421  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:11.860985  232335 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:35:11.861020  232335 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:35:11.860907  233147 retry.go:31] will retry after 614.030557ms: waiting for machine to come up
	I1030 23:35:12.477044  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:12.477493  232335 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:35:12.477526  232335 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:35:12.477454  233147 retry.go:31] will retry after 743.914178ms: waiting for machine to come up
	I1030 23:35:13.223389  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:13.223940  232335 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:35:13.223969  232335 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:35:13.223878  233147 retry.go:31] will retry after 1.099009559s: waiting for machine to come up
	I1030 23:35:14.324466  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:14.324896  232335 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:35:14.324961  232335 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:35:14.324854  233147 retry.go:31] will retry after 1.565801234s: waiting for machine to come up
	I1030 23:35:15.892671  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:15.893177  232335 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:35:15.893208  232335 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:35:15.893045  233147 retry.go:31] will retry after 1.63700567s: waiting for machine to come up
	I1030 23:35:17.531826  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:17.532288  232335 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:35:17.532326  232335 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:35:17.532219  233147 retry.go:31] will retry after 2.175894349s: waiting for machine to come up
	I1030 23:35:19.710169  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:19.710842  232335 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:35:19.710873  232335 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:35:19.710770  233147 retry.go:31] will retry after 2.679198623s: waiting for machine to come up
	I1030 23:35:22.393533  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:22.394003  232335 main.go:141] libmachine: (multinode-370491) DBG | unable to find current IP address of domain multinode-370491 in network mk-multinode-370491
	I1030 23:35:22.394034  232335 main.go:141] libmachine: (multinode-370491) DBG | I1030 23:35:22.393941  233147 retry.go:31] will retry after 4.099894748s: waiting for machine to come up
	I1030 23:35:26.497613  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:26.498199  232335 main.go:141] libmachine: (multinode-370491) Found IP for machine: 192.168.39.231
	I1030 23:35:26.498230  232335 main.go:141] libmachine: (multinode-370491) Reserving static IP address...
	I1030 23:35:26.498250  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has current primary IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:26.498807  232335 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "multinode-370491", mac: "52:54:00:40:7c:a3", ip: "192.168.39.231"} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:35:26.498855  232335 main.go:141] libmachine: (multinode-370491) Reserved static IP address: 192.168.39.231
	I1030 23:35:26.498871  232335 main.go:141] libmachine: (multinode-370491) DBG | skip adding static IP to network mk-multinode-370491 - found existing host DHCP lease matching {name: "multinode-370491", mac: "52:54:00:40:7c:a3", ip: "192.168.39.231"}
	I1030 23:35:26.498891  232335 main.go:141] libmachine: (multinode-370491) DBG | Getting to WaitForSSH function...
	I1030 23:35:26.498936  232335 main.go:141] libmachine: (multinode-370491) Waiting for SSH to be available...
	I1030 23:35:26.501039  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:26.501343  232335 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:35:26.501375  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:26.501498  232335 main.go:141] libmachine: (multinode-370491) DBG | Using SSH client type: external
	I1030 23:35:26.501528  232335 main.go:141] libmachine: (multinode-370491) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa (-rw-------)
	I1030 23:35:26.501555  232335 main.go:141] libmachine: (multinode-370491) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1030 23:35:26.501570  232335 main.go:141] libmachine: (multinode-370491) DBG | About to run SSH command:
	I1030 23:35:26.501578  232335 main.go:141] libmachine: (multinode-370491) DBG | exit 0
	I1030 23:35:26.592384  232335 main.go:141] libmachine: (multinode-370491) DBG | SSH cmd err, output: <nil>: 
	I1030 23:35:26.592836  232335 main.go:141] libmachine: (multinode-370491) Calling .GetConfigRaw
	I1030 23:35:26.593572  232335 main.go:141] libmachine: (multinode-370491) Calling .GetIP
	I1030 23:35:26.596036  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:26.596377  232335 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:35:26.596418  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:26.596826  232335 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/config.json ...
	I1030 23:35:26.597113  232335 machine.go:88] provisioning docker machine ...
	I1030 23:35:26.597140  232335 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:35:26.597366  232335 main.go:141] libmachine: (multinode-370491) Calling .GetMachineName
	I1030 23:35:26.597561  232335 buildroot.go:166] provisioning hostname "multinode-370491"
	I1030 23:35:26.597582  232335 main.go:141] libmachine: (multinode-370491) Calling .GetMachineName
	I1030 23:35:26.597711  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:35:26.599678  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:26.599997  232335 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:35:26.600029  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:26.600146  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:35:26.600303  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:35:26.600474  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:35:26.600672  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:35:26.600840  232335 main.go:141] libmachine: Using SSH client type: native
	I1030 23:35:26.601296  232335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1030 23:35:26.601312  232335 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-370491 && echo "multinode-370491" | sudo tee /etc/hostname
	I1030 23:35:26.737333  232335 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-370491
	
	I1030 23:35:26.737364  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:35:26.740313  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:26.740666  232335 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:35:26.740727  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:26.740971  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:35:26.741186  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:35:26.741361  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:35:26.741538  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:35:26.741789  232335 main.go:141] libmachine: Using SSH client type: native
	I1030 23:35:26.742146  232335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1030 23:35:26.742174  232335 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-370491' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-370491/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-370491' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 23:35:26.868528  232335 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 23:35:26.868603  232335 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1030 23:35:26.868639  232335 buildroot.go:174] setting up certificates
	I1030 23:35:26.868650  232335 provision.go:83] configureAuth start
	I1030 23:35:26.868663  232335 main.go:141] libmachine: (multinode-370491) Calling .GetMachineName
	I1030 23:35:26.869009  232335 main.go:141] libmachine: (multinode-370491) Calling .GetIP
	I1030 23:35:26.871696  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:26.872156  232335 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:35:26.872187  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:26.872368  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:35:26.874350  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:26.874739  232335 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:35:26.874775  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:26.874838  232335 provision.go:138] copyHostCerts
	I1030 23:35:26.874884  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1030 23:35:26.874944  232335 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1030 23:35:26.874977  232335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1030 23:35:26.875043  232335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1030 23:35:26.875113  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1030 23:35:26.875134  232335 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1030 23:35:26.875140  232335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1030 23:35:26.875164  232335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1030 23:35:26.875208  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1030 23:35:26.875226  232335 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1030 23:35:26.875232  232335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1030 23:35:26.875251  232335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1030 23:35:26.875297  232335 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.multinode-370491 san=[192.168.39.231 192.168.39.231 localhost 127.0.0.1 minikube multinode-370491]
	I1030 23:35:27.044823  232335 provision.go:172] copyRemoteCerts
	I1030 23:35:27.044893  232335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 23:35:27.044921  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:35:27.047616  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:27.047968  232335 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:35:27.048003  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:27.048125  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:35:27.048313  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:35:27.048487  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:35:27.048630  232335 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa Username:docker}
	I1030 23:35:27.138274  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 23:35:27.138357  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1030 23:35:27.159738  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 23:35:27.159807  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1030 23:35:27.180851  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 23:35:27.180921  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 23:35:27.201789  232335 provision.go:86] duration metric: configureAuth took 333.122917ms
	I1030 23:35:27.201814  232335 buildroot.go:189] setting minikube options for container-runtime
	I1030 23:35:27.202092  232335 config.go:182] Loaded profile config "multinode-370491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:35:27.202230  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:35:27.204883  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:27.205306  232335 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:35:27.205341  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:27.205465  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:35:27.205672  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:35:27.205851  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:35:27.206010  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:35:27.206172  232335 main.go:141] libmachine: Using SSH client type: native
	I1030 23:35:27.206506  232335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1030 23:35:27.206524  232335 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 23:35:27.509123  232335 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 23:35:27.509149  232335 machine.go:91] provisioned docker machine in 912.018414ms
	I1030 23:35:27.509160  232335 start.go:300] post-start starting for "multinode-370491" (driver="kvm2")
	I1030 23:35:27.509170  232335 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 23:35:27.509216  232335 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:35:27.509587  232335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 23:35:27.509628  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:35:27.512464  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:27.512890  232335 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:35:27.512922  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:27.513055  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:35:27.513260  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:35:27.513455  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:35:27.513642  232335 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa Username:docker}
	I1030 23:35:27.603861  232335 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 23:35:27.608201  232335 command_runner.go:130] > NAME=Buildroot
	I1030 23:35:27.608221  232335 command_runner.go:130] > VERSION=2021.02.12-1-gea8740b-dirty
	I1030 23:35:27.608226  232335 command_runner.go:130] > ID=buildroot
	I1030 23:35:27.608232  232335 command_runner.go:130] > VERSION_ID=2021.02.12
	I1030 23:35:27.608238  232335 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1030 23:35:27.608276  232335 info.go:137] Remote host: Buildroot 2021.02.12
	I1030 23:35:27.608292  232335 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1030 23:35:27.608395  232335 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1030 23:35:27.608516  232335 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1030 23:35:27.608530  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> /etc/ssl/certs/2160052.pem
	I1030 23:35:27.608651  232335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 23:35:27.618274  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1030 23:35:27.640257  232335 start.go:303] post-start completed in 131.080816ms
	I1030 23:35:27.640280  232335 fix.go:56] fixHost completed within 19.206628455s
	I1030 23:35:27.640303  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:35:27.642902  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:27.643298  232335 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:35:27.643326  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:27.643495  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:35:27.643717  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:35:27.643886  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:35:27.644086  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:35:27.644243  232335 main.go:141] libmachine: Using SSH client type: native
	I1030 23:35:27.644611  232335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1030 23:35:27.644624  232335 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1030 23:35:27.765678  232335 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698708927.712914815
	
	I1030 23:35:27.765701  232335 fix.go:206] guest clock: 1698708927.712914815
	I1030 23:35:27.765708  232335 fix.go:219] Guest: 2023-10-30 23:35:27.712914815 +0000 UTC Remote: 2023-10-30 23:35:27.640284164 +0000 UTC m=+317.424310581 (delta=72.630651ms)
	I1030 23:35:27.765730  232335 fix.go:190] guest clock delta is within tolerance: 72.630651ms
	I1030 23:35:27.765736  232335 start.go:83] releasing machines lock for "multinode-370491", held for 19.332161104s
	I1030 23:35:27.765768  232335 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:35:27.766058  232335 main.go:141] libmachine: (multinode-370491) Calling .GetIP
	I1030 23:35:27.768644  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:27.768980  232335 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:35:27.769012  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:27.769176  232335 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:35:27.769761  232335 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:35:27.769956  232335 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:35:27.770042  232335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 23:35:27.770083  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:35:27.770243  232335 ssh_runner.go:195] Run: cat /version.json
	I1030 23:35:27.770283  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:35:27.772690  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:27.773008  232335 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:35:27.773050  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:27.773153  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:27.773184  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:35:27.773369  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:35:27.773532  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:35:27.773569  232335 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:35:27.773600  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:27.773725  232335 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa Username:docker}
	I1030 23:35:27.773745  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:35:27.773899  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:35:27.774031  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:35:27.774162  232335 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa Username:docker}
	I1030 23:35:27.865302  232335 command_runner.go:130] > {"iso_version": "v1.32.0-1698684775-17527", "kicbase_version": "v0.0.41-1698660445-17527", "minikube_version": "v1.32.0-beta.0", "commit": "4c1f451320d1a77675b9eefd8e846c23ac017af4"}
	I1030 23:35:27.865606  232335 ssh_runner.go:195] Run: systemctl --version
	I1030 23:35:27.893375  232335 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1030 23:35:27.893438  232335 command_runner.go:130] > systemd 247 (247)
	I1030 23:35:27.893464  232335 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1030 23:35:27.893532  232335 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 23:35:28.033843  232335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1030 23:35:28.040859  232335 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1030 23:35:28.040950  232335 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 23:35:28.041005  232335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 23:35:28.057044  232335 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1030 23:35:28.057088  232335 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1030 23:35:28.057099  232335 start.go:472] detecting cgroup driver to use...
	I1030 23:35:28.057155  232335 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 23:35:28.070341  232335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 23:35:28.081498  232335 docker.go:198] disabling cri-docker service (if available) ...
	I1030 23:35:28.081560  232335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 23:35:28.094867  232335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 23:35:28.108071  232335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 23:35:28.226084  232335 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1030 23:35:28.226155  232335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 23:35:28.363094  232335 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1030 23:35:28.363133  232335 docker.go:214] disabling docker service ...
	I1030 23:35:28.363184  232335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 23:35:28.379421  232335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 23:35:28.390185  232335 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1030 23:35:28.391237  232335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 23:35:28.504524  232335 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1030 23:35:28.504607  232335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 23:35:28.613103  232335 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1030 23:35:28.613139  232335 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1030 23:35:28.613219  232335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 23:35:28.625525  232335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 23:35:28.642146  232335 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1030 23:35:28.642583  232335 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1030 23:35:28.642642  232335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:35:28.651839  232335 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 23:35:28.651912  232335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:35:28.660590  232335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:35:28.669273  232335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:35:28.677919  232335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 23:35:28.686920  232335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 23:35:28.694674  232335 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 23:35:28.694714  232335 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1030 23:35:28.694767  232335 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1030 23:35:28.706965  232335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 23:35:28.714940  232335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 23:35:28.829311  232335 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 23:35:28.987609  232335 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 23:35:28.987692  232335 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 23:35:28.992533  232335 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1030 23:35:28.992554  232335 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1030 23:35:28.992564  232335 command_runner.go:130] > Device: 16h/22d	Inode: 739         Links: 1
	I1030 23:35:28.992583  232335 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1030 23:35:28.992611  232335 command_runner.go:130] > Access: 2023-10-30 23:35:28.922527687 +0000
	I1030 23:35:28.992628  232335 command_runner.go:130] > Modify: 2023-10-30 23:35:28.922527687 +0000
	I1030 23:35:28.992637  232335 command_runner.go:130] > Change: 2023-10-30 23:35:28.922527687 +0000
	I1030 23:35:28.992648  232335 command_runner.go:130] >  Birth: -
	I1030 23:35:28.992684  232335 start.go:540] Will wait 60s for crictl version
	I1030 23:35:28.992760  232335 ssh_runner.go:195] Run: which crictl
	I1030 23:35:28.996715  232335 command_runner.go:130] > /usr/bin/crictl
	I1030 23:35:28.996987  232335 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 23:35:29.041726  232335 command_runner.go:130] > Version:  0.1.0
	I1030 23:35:29.041755  232335 command_runner.go:130] > RuntimeName:  cri-o
	I1030 23:35:29.041762  232335 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1030 23:35:29.041767  232335 command_runner.go:130] > RuntimeApiVersion:  v1
	I1030 23:35:29.041829  232335 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1030 23:35:29.041931  232335 ssh_runner.go:195] Run: crio --version
	I1030 23:35:29.083864  232335 command_runner.go:130] > crio version 1.24.1
	I1030 23:35:29.083888  232335 command_runner.go:130] > Version:          1.24.1
	I1030 23:35:29.083894  232335 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1030 23:35:29.083900  232335 command_runner.go:130] > GitTreeState:     dirty
	I1030 23:35:29.083911  232335 command_runner.go:130] > BuildDate:        2023-10-30T22:24:56Z
	I1030 23:35:29.083919  232335 command_runner.go:130] > GoVersion:        go1.19.9
	I1030 23:35:29.083927  232335 command_runner.go:130] > Compiler:         gc
	I1030 23:35:29.083935  232335 command_runner.go:130] > Platform:         linux/amd64
	I1030 23:35:29.083947  232335 command_runner.go:130] > Linkmode:         dynamic
	I1030 23:35:29.083954  232335 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1030 23:35:29.083958  232335 command_runner.go:130] > SeccompEnabled:   true
	I1030 23:35:29.083965  232335 command_runner.go:130] > AppArmorEnabled:  false
	I1030 23:35:29.084048  232335 ssh_runner.go:195] Run: crio --version
	I1030 23:35:29.130607  232335 command_runner.go:130] > crio version 1.24.1
	I1030 23:35:29.130627  232335 command_runner.go:130] > Version:          1.24.1
	I1030 23:35:29.130638  232335 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1030 23:35:29.130642  232335 command_runner.go:130] > GitTreeState:     dirty
	I1030 23:35:29.130649  232335 command_runner.go:130] > BuildDate:        2023-10-30T22:24:56Z
	I1030 23:35:29.130655  232335 command_runner.go:130] > GoVersion:        go1.19.9
	I1030 23:35:29.130662  232335 command_runner.go:130] > Compiler:         gc
	I1030 23:35:29.130670  232335 command_runner.go:130] > Platform:         linux/amd64
	I1030 23:35:29.130680  232335 command_runner.go:130] > Linkmode:         dynamic
	I1030 23:35:29.130692  232335 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1030 23:35:29.130699  232335 command_runner.go:130] > SeccompEnabled:   true
	I1030 23:35:29.130703  232335 command_runner.go:130] > AppArmorEnabled:  false
	I1030 23:35:29.134500  232335 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1030 23:35:29.135871  232335 main.go:141] libmachine: (multinode-370491) Calling .GetIP
	I1030 23:35:29.138716  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:29.139085  232335 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:35:29.139118  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:35:29.139270  232335 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 23:35:29.143176  232335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 23:35:29.156043  232335 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1030 23:35:29.156102  232335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 23:35:29.188811  232335 command_runner.go:130] > {
	I1030 23:35:29.188837  232335 command_runner.go:130] >   "images": [
	I1030 23:35:29.188841  232335 command_runner.go:130] >     {
	I1030 23:35:29.188849  232335 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1030 23:35:29.188854  232335 command_runner.go:130] >       "repoTags": [
	I1030 23:35:29.188860  232335 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1030 23:35:29.188863  232335 command_runner.go:130] >       ],
	I1030 23:35:29.188867  232335 command_runner.go:130] >       "repoDigests": [
	I1030 23:35:29.188886  232335 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1030 23:35:29.188893  232335 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1030 23:35:29.188897  232335 command_runner.go:130] >       ],
	I1030 23:35:29.188901  232335 command_runner.go:130] >       "size": "750414",
	I1030 23:35:29.188905  232335 command_runner.go:130] >       "uid": {
	I1030 23:35:29.188917  232335 command_runner.go:130] >         "value": "65535"
	I1030 23:35:29.188932  232335 command_runner.go:130] >       },
	I1030 23:35:29.188957  232335 command_runner.go:130] >       "username": "",
	I1030 23:35:29.188966  232335 command_runner.go:130] >       "spec": null,
	I1030 23:35:29.188973  232335 command_runner.go:130] >       "pinned": false
	I1030 23:35:29.188979  232335 command_runner.go:130] >     }
	I1030 23:35:29.188987  232335 command_runner.go:130] >   ]
	I1030 23:35:29.188996  232335 command_runner.go:130] > }
	I1030 23:35:29.190160  232335 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1030 23:35:29.190223  232335 ssh_runner.go:195] Run: which lz4
	I1030 23:35:29.193896  232335 command_runner.go:130] > /usr/bin/lz4
	I1030 23:35:29.193934  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1030 23:35:29.194020  232335 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1030 23:35:29.197946  232335 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 23:35:29.197981  232335 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1030 23:35:29.198005  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1030 23:35:31.039117  232335 crio.go:444] Took 1.845120 seconds to copy over tarball
	I1030 23:35:31.039221  232335 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1030 23:35:33.853407  232335 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.814146062s)
	I1030 23:35:33.853449  232335 crio.go:451] Took 2.814299 seconds to extract the tarball
	I1030 23:35:33.853464  232335 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1030 23:35:33.894586  232335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1030 23:35:33.939632  232335 command_runner.go:130] > {
	I1030 23:35:33.939658  232335 command_runner.go:130] >   "images": [
	I1030 23:35:33.939684  232335 command_runner.go:130] >     {
	I1030 23:35:33.939698  232335 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1030 23:35:33.939704  232335 command_runner.go:130] >       "repoTags": [
	I1030 23:35:33.939713  232335 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1030 23:35:33.939719  232335 command_runner.go:130] >       ],
	I1030 23:35:33.939729  232335 command_runner.go:130] >       "repoDigests": [
	I1030 23:35:33.939756  232335 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1030 23:35:33.939772  232335 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1030 23:35:33.939779  232335 command_runner.go:130] >       ],
	I1030 23:35:33.939794  232335 command_runner.go:130] >       "size": "65258016",
	I1030 23:35:33.939804  232335 command_runner.go:130] >       "uid": null,
	I1030 23:35:33.939814  232335 command_runner.go:130] >       "username": "",
	I1030 23:35:33.939825  232335 command_runner.go:130] >       "spec": null,
	I1030 23:35:33.939835  232335 command_runner.go:130] >       "pinned": false
	I1030 23:35:33.939844  232335 command_runner.go:130] >     },
	I1030 23:35:33.939852  232335 command_runner.go:130] >     {
	I1030 23:35:33.939864  232335 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1030 23:35:33.939874  232335 command_runner.go:130] >       "repoTags": [
	I1030 23:35:33.939889  232335 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1030 23:35:33.939898  232335 command_runner.go:130] >       ],
	I1030 23:35:33.939908  232335 command_runner.go:130] >       "repoDigests": [
	I1030 23:35:33.939922  232335 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1030 23:35:33.939938  232335 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1030 23:35:33.939948  232335 command_runner.go:130] >       ],
	I1030 23:35:33.939965  232335 command_runner.go:130] >       "size": "31470524",
	I1030 23:35:33.939974  232335 command_runner.go:130] >       "uid": null,
	I1030 23:35:33.939980  232335 command_runner.go:130] >       "username": "",
	I1030 23:35:33.939985  232335 command_runner.go:130] >       "spec": null,
	I1030 23:35:33.939991  232335 command_runner.go:130] >       "pinned": false
	I1030 23:35:33.939994  232335 command_runner.go:130] >     },
	I1030 23:35:33.940001  232335 command_runner.go:130] >     {
	I1030 23:35:33.940007  232335 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1030 23:35:33.940013  232335 command_runner.go:130] >       "repoTags": [
	I1030 23:35:33.940019  232335 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1030 23:35:33.940024  232335 command_runner.go:130] >       ],
	I1030 23:35:33.940029  232335 command_runner.go:130] >       "repoDigests": [
	I1030 23:35:33.940038  232335 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1030 23:35:33.940048  232335 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1030 23:35:33.940054  232335 command_runner.go:130] >       ],
	I1030 23:35:33.940058  232335 command_runner.go:130] >       "size": "53621675",
	I1030 23:35:33.940064  232335 command_runner.go:130] >       "uid": null,
	I1030 23:35:33.940069  232335 command_runner.go:130] >       "username": "",
	I1030 23:35:33.940075  232335 command_runner.go:130] >       "spec": null,
	I1030 23:35:33.940081  232335 command_runner.go:130] >       "pinned": false
	I1030 23:35:33.940085  232335 command_runner.go:130] >     },
	I1030 23:35:33.940088  232335 command_runner.go:130] >     {
	I1030 23:35:33.940097  232335 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1030 23:35:33.940104  232335 command_runner.go:130] >       "repoTags": [
	I1030 23:35:33.940109  232335 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1030 23:35:33.940121  232335 command_runner.go:130] >       ],
	I1030 23:35:33.940125  232335 command_runner.go:130] >       "repoDigests": [
	I1030 23:35:33.940132  232335 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1030 23:35:33.940138  232335 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1030 23:35:33.940201  232335 command_runner.go:130] >       ],
	I1030 23:35:33.940215  232335 command_runner.go:130] >       "size": "295456551",
	I1030 23:35:33.940219  232335 command_runner.go:130] >       "uid": {
	I1030 23:35:33.940223  232335 command_runner.go:130] >         "value": "0"
	I1030 23:35:33.940226  232335 command_runner.go:130] >       },
	I1030 23:35:33.940232  232335 command_runner.go:130] >       "username": "",
	I1030 23:35:33.940237  232335 command_runner.go:130] >       "spec": null,
	I1030 23:35:33.940246  232335 command_runner.go:130] >       "pinned": false
	I1030 23:35:33.940250  232335 command_runner.go:130] >     },
	I1030 23:35:33.940253  232335 command_runner.go:130] >     {
	I1030 23:35:33.940262  232335 command_runner.go:130] >       "id": "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076",
	I1030 23:35:33.940266  232335 command_runner.go:130] >       "repoTags": [
	I1030 23:35:33.940272  232335 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1030 23:35:33.940278  232335 command_runner.go:130] >       ],
	I1030 23:35:33.940283  232335 command_runner.go:130] >       "repoDigests": [
	I1030 23:35:33.940295  232335 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab",
	I1030 23:35:33.940305  232335 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1030 23:35:33.940311  232335 command_runner.go:130] >       ],
	I1030 23:35:33.940316  232335 command_runner.go:130] >       "size": "127165392",
	I1030 23:35:33.940322  232335 command_runner.go:130] >       "uid": {
	I1030 23:35:33.940327  232335 command_runner.go:130] >         "value": "0"
	I1030 23:35:33.940333  232335 command_runner.go:130] >       },
	I1030 23:35:33.940337  232335 command_runner.go:130] >       "username": "",
	I1030 23:35:33.940341  232335 command_runner.go:130] >       "spec": null,
	I1030 23:35:33.940348  232335 command_runner.go:130] >       "pinned": false
	I1030 23:35:33.940356  232335 command_runner.go:130] >     },
	I1030 23:35:33.940362  232335 command_runner.go:130] >     {
	I1030 23:35:33.940368  232335 command_runner.go:130] >       "id": "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3",
	I1030 23:35:33.940375  232335 command_runner.go:130] >       "repoTags": [
	I1030 23:35:33.940380  232335 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1030 23:35:33.940384  232335 command_runner.go:130] >       ],
	I1030 23:35:33.940389  232335 command_runner.go:130] >       "repoDigests": [
	I1030 23:35:33.940401  232335 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1030 23:35:33.940417  232335 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"
	I1030 23:35:33.940426  232335 command_runner.go:130] >       ],
	I1030 23:35:33.940433  232335 command_runner.go:130] >       "size": "123188534",
	I1030 23:35:33.940443  232335 command_runner.go:130] >       "uid": {
	I1030 23:35:33.940452  232335 command_runner.go:130] >         "value": "0"
	I1030 23:35:33.940461  232335 command_runner.go:130] >       },
	I1030 23:35:33.940469  232335 command_runner.go:130] >       "username": "",
	I1030 23:35:33.940479  232335 command_runner.go:130] >       "spec": null,
	I1030 23:35:33.940489  232335 command_runner.go:130] >       "pinned": false
	I1030 23:35:33.940498  232335 command_runner.go:130] >     },
	I1030 23:35:33.940510  232335 command_runner.go:130] >     {
	I1030 23:35:33.940523  232335 command_runner.go:130] >       "id": "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf",
	I1030 23:35:33.940533  232335 command_runner.go:130] >       "repoTags": [
	I1030 23:35:33.940545  232335 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1030 23:35:33.940554  232335 command_runner.go:130] >       ],
	I1030 23:35:33.940565  232335 command_runner.go:130] >       "repoDigests": [
	I1030 23:35:33.940579  232335 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8",
	I1030 23:35:33.940594  232335 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1030 23:35:33.940603  232335 command_runner.go:130] >       ],
	I1030 23:35:33.940608  232335 command_runner.go:130] >       "size": "74691991",
	I1030 23:35:33.940615  232335 command_runner.go:130] >       "uid": null,
	I1030 23:35:33.940619  232335 command_runner.go:130] >       "username": "",
	I1030 23:35:33.940625  232335 command_runner.go:130] >       "spec": null,
	I1030 23:35:33.940630  232335 command_runner.go:130] >       "pinned": false
	I1030 23:35:33.940636  232335 command_runner.go:130] >     },
	I1030 23:35:33.940640  232335 command_runner.go:130] >     {
	I1030 23:35:33.940652  232335 command_runner.go:130] >       "id": "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4",
	I1030 23:35:33.940662  232335 command_runner.go:130] >       "repoTags": [
	I1030 23:35:33.940679  232335 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1030 23:35:33.940688  232335 command_runner.go:130] >       ],
	I1030 23:35:33.940697  232335 command_runner.go:130] >       "repoDigests": [
	I1030 23:35:33.940778  232335 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1030 23:35:33.940797  232335 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"
	I1030 23:35:33.940803  232335 command_runner.go:130] >       ],
	I1030 23:35:33.940809  232335 command_runner.go:130] >       "size": "61498678",
	I1030 23:35:33.940819  232335 command_runner.go:130] >       "uid": {
	I1030 23:35:33.940828  232335 command_runner.go:130] >         "value": "0"
	I1030 23:35:33.940841  232335 command_runner.go:130] >       },
	I1030 23:35:33.940852  232335 command_runner.go:130] >       "username": "",
	I1030 23:35:33.940862  232335 command_runner.go:130] >       "spec": null,
	I1030 23:35:33.940872  232335 command_runner.go:130] >       "pinned": false
	I1030 23:35:33.940885  232335 command_runner.go:130] >     },
	I1030 23:35:33.940892  232335 command_runner.go:130] >     {
	I1030 23:35:33.940902  232335 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1030 23:35:33.940912  232335 command_runner.go:130] >       "repoTags": [
	I1030 23:35:33.940921  232335 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1030 23:35:33.940933  232335 command_runner.go:130] >       ],
	I1030 23:35:33.940958  232335 command_runner.go:130] >       "repoDigests": [
	I1030 23:35:33.940969  232335 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1030 23:35:33.940984  232335 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1030 23:35:33.940991  232335 command_runner.go:130] >       ],
	I1030 23:35:33.940997  232335 command_runner.go:130] >       "size": "750414",
	I1030 23:35:33.941007  232335 command_runner.go:130] >       "uid": {
	I1030 23:35:33.941017  232335 command_runner.go:130] >         "value": "65535"
	I1030 23:35:33.941026  232335 command_runner.go:130] >       },
	I1030 23:35:33.941039  232335 command_runner.go:130] >       "username": "",
	I1030 23:35:33.941045  232335 command_runner.go:130] >       "spec": null,
	I1030 23:35:33.941052  232335 command_runner.go:130] >       "pinned": false
	I1030 23:35:33.941058  232335 command_runner.go:130] >     }
	I1030 23:35:33.941065  232335 command_runner.go:130] >   ]
	I1030 23:35:33.941070  232335 command_runner.go:130] > }
	I1030 23:35:33.941239  232335 crio.go:496] all images are preloaded for cri-o runtime.
	I1030 23:35:33.941261  232335 cache_images.go:84] Images are preloaded, skipping loading
	I1030 23:35:33.941333  232335 ssh_runner.go:195] Run: crio config
	I1030 23:35:33.996511  232335 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1030 23:35:33.996542  232335 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1030 23:35:33.996558  232335 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1030 23:35:33.996565  232335 command_runner.go:130] > #
	I1030 23:35:33.996577  232335 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1030 23:35:33.996590  232335 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1030 23:35:33.996600  232335 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1030 23:35:33.996626  232335 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1030 23:35:33.996638  232335 command_runner.go:130] > # reload'.
	I1030 23:35:33.996645  232335 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1030 23:35:33.996656  232335 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1030 23:35:33.996668  232335 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1030 23:35:33.996682  232335 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1030 23:35:33.996691  232335 command_runner.go:130] > [crio]
	I1030 23:35:33.996703  232335 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1030 23:35:33.996715  232335 command_runner.go:130] > # containers images, in this directory.
	I1030 23:35:33.996727  232335 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1030 23:35:33.996757  232335 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1030 23:35:33.996774  232335 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1030 23:35:33.996784  232335 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1030 23:35:33.996798  232335 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1030 23:35:33.996809  232335 command_runner.go:130] > storage_driver = "overlay"
	I1030 23:35:33.996820  232335 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1030 23:35:33.996835  232335 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1030 23:35:33.996846  232335 command_runner.go:130] > storage_option = [
	I1030 23:35:33.996858  232335 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1030 23:35:33.996865  232335 command_runner.go:130] > ]
	I1030 23:35:33.996880  232335 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1030 23:35:33.996893  232335 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1030 23:35:33.996905  232335 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1030 23:35:33.996918  232335 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1030 23:35:33.996932  232335 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1030 23:35:33.996959  232335 command_runner.go:130] > # always happen on a node reboot
	I1030 23:35:33.996971  232335 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1030 23:35:33.996983  232335 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1030 23:35:33.996999  232335 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1030 23:35:33.997015  232335 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1030 23:35:33.997027  232335 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1030 23:35:33.997043  232335 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1030 23:35:33.997060  232335 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1030 23:35:33.997071  232335 command_runner.go:130] > # internal_wipe = true
	I1030 23:35:33.997083  232335 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1030 23:35:33.997096  232335 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1030 23:35:33.997109  232335 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1030 23:35:33.997122  232335 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1030 23:35:33.997135  232335 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1030 23:35:33.997144  232335 command_runner.go:130] > [crio.api]
	I1030 23:35:33.997152  232335 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1030 23:35:33.997164  232335 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1030 23:35:33.997174  232335 command_runner.go:130] > # IP address on which the stream server will listen.
	I1030 23:35:33.997185  232335 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1030 23:35:33.997203  232335 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1030 23:35:33.997212  232335 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1030 23:35:33.997224  232335 command_runner.go:130] > # stream_port = "0"
	I1030 23:35:33.997234  232335 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1030 23:35:33.997272  232335 command_runner.go:130] > # stream_enable_tls = false
	I1030 23:35:33.997285  232335 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1030 23:35:33.997292  232335 command_runner.go:130] > # stream_idle_timeout = ""
	I1030 23:35:33.997302  232335 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1030 23:35:33.997318  232335 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1030 23:35:33.997325  232335 command_runner.go:130] > # minutes.
	I1030 23:35:33.997335  232335 command_runner.go:130] > # stream_tls_cert = ""
	I1030 23:35:33.997347  232335 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1030 23:35:33.997361  232335 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1030 23:35:33.997370  232335 command_runner.go:130] > # stream_tls_key = ""
	I1030 23:35:33.997380  232335 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1030 23:35:33.997394  232335 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1030 23:35:33.997403  232335 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1030 23:35:33.997410  232335 command_runner.go:130] > # stream_tls_ca = ""
	I1030 23:35:33.997427  232335 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1030 23:35:33.997439  232335 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1030 23:35:33.997461  232335 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1030 23:35:33.997473  232335 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1030 23:35:33.997509  232335 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1030 23:35:33.997521  232335 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1030 23:35:33.997529  232335 command_runner.go:130] > [crio.runtime]
	I1030 23:35:33.997543  232335 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1030 23:35:33.997556  232335 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1030 23:35:33.997566  232335 command_runner.go:130] > # "nofile=1024:2048"
	I1030 23:35:33.997581  232335 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1030 23:35:33.997592  232335 command_runner.go:130] > # default_ulimits = [
	I1030 23:35:33.997602  232335 command_runner.go:130] > # ]
	I1030 23:35:33.997617  232335 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1030 23:35:33.997627  232335 command_runner.go:130] > # no_pivot = false
	I1030 23:35:33.997637  232335 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1030 23:35:33.997651  232335 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1030 23:35:33.997664  232335 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1030 23:35:33.997677  232335 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1030 23:35:33.997690  232335 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1030 23:35:33.997710  232335 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1030 23:35:33.997722  232335 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1030 23:35:33.997734  232335 command_runner.go:130] > # Cgroup setting for conmon
	I1030 23:35:33.997752  232335 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1030 23:35:33.997763  232335 command_runner.go:130] > conmon_cgroup = "pod"
	I1030 23:35:33.997775  232335 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1030 23:35:33.997789  232335 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1030 23:35:33.997804  232335 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1030 23:35:33.997815  232335 command_runner.go:130] > conmon_env = [
	I1030 23:35:33.997826  232335 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1030 23:35:33.997835  232335 command_runner.go:130] > ]
	I1030 23:35:33.997846  232335 command_runner.go:130] > # Additional environment variables to set for all the
	I1030 23:35:33.997858  232335 command_runner.go:130] > # containers. These are overridden if set in the
	I1030 23:35:33.997870  232335 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1030 23:35:33.997881  232335 command_runner.go:130] > # default_env = [
	I1030 23:35:33.997888  232335 command_runner.go:130] > # ]
	I1030 23:35:33.997899  232335 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1030 23:35:33.997909  232335 command_runner.go:130] > # selinux = false
	I1030 23:35:33.997928  232335 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1030 23:35:33.997948  232335 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1030 23:35:33.997962  232335 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1030 23:35:33.997973  232335 command_runner.go:130] > # seccomp_profile = ""
	I1030 23:35:33.997985  232335 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1030 23:35:33.997999  232335 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1030 23:35:33.998013  232335 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1030 23:35:33.998024  232335 command_runner.go:130] > # which might increase security.
	I1030 23:35:33.998034  232335 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1030 23:35:33.998049  232335 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1030 23:35:33.998063  232335 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1030 23:35:33.998078  232335 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1030 23:35:33.998093  232335 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1030 23:35:33.998105  232335 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:35:33.998117  232335 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1030 23:35:33.998131  232335 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1030 23:35:33.998142  232335 command_runner.go:130] > # the cgroup blockio controller.
	I1030 23:35:33.998153  232335 command_runner.go:130] > # blockio_config_file = ""
	I1030 23:35:33.998171  232335 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1030 23:35:33.998181  232335 command_runner.go:130] > # irqbalance daemon.
	I1030 23:35:33.998192  232335 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1030 23:35:33.998211  232335 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1030 23:35:33.998224  232335 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:35:33.998236  232335 command_runner.go:130] > # rdt_config_file = ""
	I1030 23:35:33.998249  232335 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1030 23:35:33.998260  232335 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1030 23:35:33.998301  232335 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1030 23:35:33.998312  232335 command_runner.go:130] > # separate_pull_cgroup = ""
	I1030 23:35:33.998325  232335 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1030 23:35:33.998340  232335 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1030 23:35:33.998350  232335 command_runner.go:130] > # will be added.
	I1030 23:35:33.998362  232335 command_runner.go:130] > # default_capabilities = [
	I1030 23:35:33.998372  232335 command_runner.go:130] > # 	"CHOWN",
	I1030 23:35:33.998380  232335 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1030 23:35:33.998390  232335 command_runner.go:130] > # 	"FSETID",
	I1030 23:35:33.998401  232335 command_runner.go:130] > # 	"FOWNER",
	I1030 23:35:33.998413  232335 command_runner.go:130] > # 	"SETGID",
	I1030 23:35:33.998429  232335 command_runner.go:130] > # 	"SETUID",
	I1030 23:35:33.998439  232335 command_runner.go:130] > # 	"SETPCAP",
	I1030 23:35:33.998447  232335 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1030 23:35:33.998454  232335 command_runner.go:130] > # 	"KILL",
	I1030 23:35:33.998464  232335 command_runner.go:130] > # ]
	I1030 23:35:33.998476  232335 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1030 23:35:33.998490  232335 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1030 23:35:33.998501  232335 command_runner.go:130] > # default_sysctls = [
	I1030 23:35:33.998510  232335 command_runner.go:130] > # ]
	I1030 23:35:33.998519  232335 command_runner.go:130] > # List of devices on the host that a
	I1030 23:35:33.998533  232335 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1030 23:35:33.998544  232335 command_runner.go:130] > # allowed_devices = [
	I1030 23:35:33.998557  232335 command_runner.go:130] > # 	"/dev/fuse",
	I1030 23:35:33.998566  232335 command_runner.go:130] > # ]
	I1030 23:35:33.998577  232335 command_runner.go:130] > # List of additional devices, specified as
	I1030 23:35:33.998594  232335 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1030 23:35:33.998607  232335 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1030 23:35:33.998655  232335 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1030 23:35:33.998665  232335 command_runner.go:130] > # additional_devices = [
	I1030 23:35:33.998672  232335 command_runner.go:130] > # ]
	I1030 23:35:33.998682  232335 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1030 23:35:33.998693  232335 command_runner.go:130] > # cdi_spec_dirs = [
	I1030 23:35:33.998701  232335 command_runner.go:130] > # 	"/etc/cdi",
	I1030 23:35:33.998711  232335 command_runner.go:130] > # 	"/var/run/cdi",
	I1030 23:35:33.998721  232335 command_runner.go:130] > # ]
	I1030 23:35:33.998736  232335 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1030 23:35:33.998750  232335 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1030 23:35:33.998760  232335 command_runner.go:130] > # Defaults to false.
	I1030 23:35:33.998772  232335 command_runner.go:130] > # device_ownership_from_security_context = false
	I1030 23:35:33.998788  232335 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1030 23:35:33.998802  232335 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1030 23:35:33.998813  232335 command_runner.go:130] > # hooks_dir = [
	I1030 23:35:33.998824  232335 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1030 23:35:33.998831  232335 command_runner.go:130] > # ]
	I1030 23:35:33.998845  232335 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1030 23:35:33.998864  232335 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1030 23:35:33.998877  232335 command_runner.go:130] > # its default mounts from the following two files:
	I1030 23:35:33.998886  232335 command_runner.go:130] > #
	I1030 23:35:33.998899  232335 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1030 23:35:33.998914  232335 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1030 23:35:33.998927  232335 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1030 23:35:33.998936  232335 command_runner.go:130] > #
	I1030 23:35:33.998952  232335 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1030 23:35:33.998967  232335 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1030 23:35:33.998982  232335 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1030 23:35:33.998994  232335 command_runner.go:130] > #      only add mounts it finds in this file.
	I1030 23:35:33.999003  232335 command_runner.go:130] > #
	I1030 23:35:33.999012  232335 command_runner.go:130] > # default_mounts_file = ""
	I1030 23:35:33.999024  232335 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1030 23:35:33.999038  232335 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1030 23:35:33.999048  232335 command_runner.go:130] > pids_limit = 1024
	I1030 23:35:33.999063  232335 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1030 23:35:33.999078  232335 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1030 23:35:33.999097  232335 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1030 23:35:33.999115  232335 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1030 23:35:33.999125  232335 command_runner.go:130] > # log_size_max = -1
	I1030 23:35:33.999137  232335 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1030 23:35:33.999148  232335 command_runner.go:130] > # log_to_journald = false
	I1030 23:35:33.999160  232335 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1030 23:35:33.999173  232335 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1030 23:35:33.999186  232335 command_runner.go:130] > # Path to directory for container attach sockets.
	I1030 23:35:33.999198  232335 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1030 23:35:33.999211  232335 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1030 23:35:33.999222  232335 command_runner.go:130] > # bind_mount_prefix = ""
	I1030 23:35:33.999233  232335 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1030 23:35:33.999243  232335 command_runner.go:130] > # read_only = false
	I1030 23:35:33.999258  232335 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1030 23:35:33.999273  232335 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1030 23:35:33.999284  232335 command_runner.go:130] > # live configuration reload.
	I1030 23:35:33.999293  232335 command_runner.go:130] > # log_level = "info"
	I1030 23:35:33.999306  232335 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1030 23:35:33.999322  232335 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:35:33.999332  232335 command_runner.go:130] > # log_filter = ""
	I1030 23:35:33.999344  232335 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1030 23:35:33.999359  232335 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1030 23:35:33.999369  232335 command_runner.go:130] > # separated by comma.
	I1030 23:35:33.999378  232335 command_runner.go:130] > # uid_mappings = ""
	I1030 23:35:33.999392  232335 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1030 23:35:33.999406  232335 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1030 23:35:33.999416  232335 command_runner.go:130] > # separated by comma.
	I1030 23:35:33.999424  232335 command_runner.go:130] > # gid_mappings = ""
	I1030 23:35:33.999460  232335 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1030 23:35:33.999474  232335 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1030 23:35:33.999488  232335 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1030 23:35:33.999499  232335 command_runner.go:130] > # minimum_mappable_uid = -1
	I1030 23:35:33.999511  232335 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1030 23:35:33.999525  232335 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1030 23:35:33.999537  232335 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1030 23:35:33.999548  232335 command_runner.go:130] > # minimum_mappable_gid = -1
	I1030 23:35:33.999568  232335 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1030 23:35:33.999582  232335 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1030 23:35:33.999596  232335 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1030 23:35:33.999606  232335 command_runner.go:130] > # ctr_stop_timeout = 30
	I1030 23:35:33.999618  232335 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1030 23:35:33.999632  232335 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1030 23:35:33.999644  232335 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1030 23:35:33.999657  232335 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1030 23:35:33.999667  232335 command_runner.go:130] > drop_infra_ctr = false
	I1030 23:35:33.999679  232335 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1030 23:35:33.999693  232335 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1030 23:35:33.999709  232335 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1030 23:35:33.999720  232335 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1030 23:35:33.999735  232335 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1030 23:35:33.999747  232335 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1030 23:35:33.999758  232335 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1030 23:35:33.999774  232335 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1030 23:35:33.999785  232335 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1030 23:35:33.999803  232335 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1030 23:35:33.999818  232335 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1030 23:35:33.999833  232335 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1030 23:35:33.999844  232335 command_runner.go:130] > # default_runtime = "runc"
	I1030 23:35:33.999860  232335 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1030 23:35:33.999884  232335 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating the path as a directory).
	I1030 23:35:33.999903  232335 command_runner.go:130] > # This option protects against source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1030 23:35:33.999915  232335 command_runner.go:130] > # creation as a file is not desired either.
	I1030 23:35:33.999933  232335 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1030 23:35:33.999950  232335 command_runner.go:130] > # the hostname is being managed dynamically.
	I1030 23:35:33.999962  232335 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1030 23:35:33.999971  232335 command_runner.go:130] > # ]
	I1030 23:35:33.999983  232335 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1030 23:35:33.999998  232335 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1030 23:35:34.000013  232335 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1030 23:35:34.000027  232335 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1030 23:35:34.000036  232335 command_runner.go:130] > #
	I1030 23:35:34.000047  232335 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1030 23:35:34.000063  232335 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1030 23:35:34.000075  232335 command_runner.go:130] > #  runtime_type = "oci"
	I1030 23:35:34.000084  232335 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1030 23:35:34.000099  232335 command_runner.go:130] > #  privileged_without_host_devices = false
	I1030 23:35:34.000110  232335 command_runner.go:130] > #  allowed_annotations = []
	I1030 23:35:34.000120  232335 command_runner.go:130] > # Where:
	I1030 23:35:34.000130  232335 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1030 23:35:34.000144  232335 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1030 23:35:34.000159  232335 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1030 23:35:34.000173  232335 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1030 23:35:34.000183  232335 command_runner.go:130] > #   in $PATH.
	I1030 23:35:34.000198  232335 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1030 23:35:34.000210  232335 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1030 23:35:34.000225  232335 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1030 23:35:34.000235  232335 command_runner.go:130] > #   state.
	I1030 23:35:34.000250  232335 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1030 23:35:34.000261  232335 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1030 23:35:34.000275  232335 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1030 23:35:34.000292  232335 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1030 23:35:34.000307  232335 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1030 23:35:34.000322  232335 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1030 23:35:34.000334  232335 command_runner.go:130] > #   The currently recognized values are:
	I1030 23:35:34.000351  232335 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1030 23:35:34.000385  232335 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1030 23:35:34.000399  232335 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1030 23:35:34.000413  232335 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1030 23:35:34.000430  232335 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1030 23:35:34.000445  232335 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1030 23:35:34.000458  232335 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1030 23:35:34.000473  232335 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1030 23:35:34.000486  232335 command_runner.go:130] > #   should be moved to the container's cgroup
	I1030 23:35:34.000497  232335 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1030 23:35:34.000509  232335 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1030 23:35:34.000520  232335 command_runner.go:130] > runtime_type = "oci"
	I1030 23:35:34.000532  232335 command_runner.go:130] > runtime_root = "/run/runc"
	I1030 23:35:34.000542  232335 command_runner.go:130] > runtime_config_path = ""
	I1030 23:35:34.000554  232335 command_runner.go:130] > monitor_path = ""
	I1030 23:35:34.000565  232335 command_runner.go:130] > monitor_cgroup = ""
	I1030 23:35:34.000574  232335 command_runner.go:130] > monitor_exec_cgroup = ""
	I1030 23:35:34.000589  232335 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1030 23:35:34.000600  232335 command_runner.go:130] > # running containers
	I1030 23:35:34.000611  232335 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1030 23:35:34.000624  232335 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1030 23:35:34.000686  232335 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1030 23:35:34.000698  232335 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1030 23:35:34.000709  232335 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1030 23:35:34.000721  232335 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1030 23:35:34.000733  232335 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1030 23:35:34.000742  232335 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1030 23:35:34.000751  232335 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1030 23:35:34.000762  232335 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1030 23:35:34.000775  232335 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1030 23:35:34.000787  232335 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1030 23:35:34.000795  232335 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1030 23:35:34.000806  232335 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1030 23:35:34.000813  232335 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1030 23:35:34.000818  232335 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1030 23:35:34.000827  232335 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1030 23:35:34.000837  232335 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1030 23:35:34.000842  232335 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1030 23:35:34.000849  232335 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1030 23:35:34.000852  232335 command_runner.go:130] > # Example:
	I1030 23:35:34.000858  232335 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1030 23:35:34.000865  232335 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1030 23:35:34.000873  232335 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1030 23:35:34.000882  232335 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1030 23:35:34.000893  232335 command_runner.go:130] > # cpuset = 0
	I1030 23:35:34.000900  232335 command_runner.go:130] > # cpushares = "0-1"
	I1030 23:35:34.000906  232335 command_runner.go:130] > # Where:
	I1030 23:35:34.000917  232335 command_runner.go:130] > # The workload name is workload-type.
	I1030 23:35:34.000933  232335 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1030 23:35:34.000964  232335 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1030 23:35:34.000982  232335 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1030 23:35:34.000998  232335 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1030 23:35:34.001011  232335 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1030 23:35:34.001019  232335 command_runner.go:130] > # 
	I1030 23:35:34.001026  232335 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1030 23:35:34.001031  232335 command_runner.go:130] > #
	I1030 23:35:34.001037  232335 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1030 23:35:34.001045  232335 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1030 23:35:34.001053  232335 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1030 23:35:34.001060  232335 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1030 23:35:34.001068  232335 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1030 23:35:34.001072  232335 command_runner.go:130] > [crio.image]
	I1030 23:35:34.001078  232335 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1030 23:35:34.001085  232335 command_runner.go:130] > # default_transport = "docker://"
	I1030 23:35:34.001092  232335 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1030 23:35:34.001106  232335 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1030 23:35:34.001117  232335 command_runner.go:130] > # global_auth_file = ""
	I1030 23:35:34.001129  232335 command_runner.go:130] > # The image used to instantiate infra containers.
	I1030 23:35:34.001144  232335 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:35:34.001156  232335 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1030 23:35:34.001169  232335 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1030 23:35:34.001177  232335 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1030 23:35:34.001195  232335 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:35:34.001202  232335 command_runner.go:130] > # pause_image_auth_file = ""
	I1030 23:35:34.001208  232335 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1030 23:35:34.001216  232335 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1030 23:35:34.001224  232335 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1030 23:35:34.001230  232335 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1030 23:35:34.001236  232335 command_runner.go:130] > # pause_command = "/pause"
	I1030 23:35:34.001242  232335 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1030 23:35:34.001250  232335 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1030 23:35:34.001258  232335 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1030 23:35:34.001266  232335 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1030 23:35:34.001274  232335 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1030 23:35:34.001278  232335 command_runner.go:130] > # signature_policy = ""
	I1030 23:35:34.001284  232335 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1030 23:35:34.001292  232335 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1030 23:35:34.001296  232335 command_runner.go:130] > # changing them here.
	I1030 23:35:34.001300  232335 command_runner.go:130] > # insecure_registries = [
	I1030 23:35:34.001303  232335 command_runner.go:130] > # ]
	I1030 23:35:34.001309  232335 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1030 23:35:34.001314  232335 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1030 23:35:34.001318  232335 command_runner.go:130] > # image_volumes = "mkdir"
	I1030 23:35:34.001340  232335 command_runner.go:130] > # Temporary directory to use for storing big files
	I1030 23:35:34.001350  232335 command_runner.go:130] > # big_files_temporary_dir = ""
	I1030 23:35:34.001356  232335 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1030 23:35:34.001362  232335 command_runner.go:130] > # CNI plugins.
	I1030 23:35:34.001367  232335 command_runner.go:130] > [crio.network]
	I1030 23:35:34.001377  232335 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1030 23:35:34.001384  232335 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1030 23:35:34.001388  232335 command_runner.go:130] > # cni_default_network = ""
	I1030 23:35:34.001397  232335 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1030 23:35:34.001404  232335 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1030 23:35:34.001412  232335 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1030 23:35:34.001419  232335 command_runner.go:130] > # plugin_dirs = [
	I1030 23:35:34.001425  232335 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1030 23:35:34.001429  232335 command_runner.go:130] > # ]
	I1030 23:35:34.001437  232335 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1030 23:35:34.001451  232335 command_runner.go:130] > [crio.metrics]
	I1030 23:35:34.001458  232335 command_runner.go:130] > # Globally enable or disable metrics support.
	I1030 23:35:34.001465  232335 command_runner.go:130] > enable_metrics = true
	I1030 23:35:34.001469  232335 command_runner.go:130] > # Specify enabled metrics collectors.
	I1030 23:35:34.001480  232335 command_runner.go:130] > # Per default all metrics are enabled.
	I1030 23:35:34.001494  232335 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1030 23:35:34.001507  232335 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1030 23:35:34.001516  232335 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1030 23:35:34.001520  232335 command_runner.go:130] > # metrics_collectors = [
	I1030 23:35:34.001527  232335 command_runner.go:130] > # 	"operations",
	I1030 23:35:34.001532  232335 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1030 23:35:34.001539  232335 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1030 23:35:34.001543  232335 command_runner.go:130] > # 	"operations_errors",
	I1030 23:35:34.001550  232335 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1030 23:35:34.001559  232335 command_runner.go:130] > # 	"image_pulls_by_name",
	I1030 23:35:34.001567  232335 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1030 23:35:34.001571  232335 command_runner.go:130] > # 	"image_pulls_failures",
	I1030 23:35:34.001576  232335 command_runner.go:130] > # 	"image_pulls_successes",
	I1030 23:35:34.001580  232335 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1030 23:35:34.001587  232335 command_runner.go:130] > # 	"image_layer_reuse",
	I1030 23:35:34.001591  232335 command_runner.go:130] > # 	"containers_oom_total",
	I1030 23:35:34.001598  232335 command_runner.go:130] > # 	"containers_oom",
	I1030 23:35:34.001602  232335 command_runner.go:130] > # 	"processes_defunct",
	I1030 23:35:34.001608  232335 command_runner.go:130] > # 	"operations_total",
	I1030 23:35:34.001613  232335 command_runner.go:130] > # 	"operations_latency_seconds",
	I1030 23:35:34.001619  232335 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1030 23:35:34.001624  232335 command_runner.go:130] > # 	"operations_errors_total",
	I1030 23:35:34.001630  232335 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1030 23:35:34.001635  232335 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1030 23:35:34.001642  232335 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1030 23:35:34.001646  232335 command_runner.go:130] > # 	"image_pulls_success_total",
	I1030 23:35:34.001652  232335 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1030 23:35:34.001661  232335 command_runner.go:130] > # 	"containers_oom_count_total",
	I1030 23:35:34.001666  232335 command_runner.go:130] > # ]
	I1030 23:35:34.001672  232335 command_runner.go:130] > # The port on which the metrics server will listen.
	I1030 23:35:34.001678  232335 command_runner.go:130] > # metrics_port = 9090
	I1030 23:35:34.001683  232335 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1030 23:35:34.001690  232335 command_runner.go:130] > # metrics_socket = ""
	I1030 23:35:34.001695  232335 command_runner.go:130] > # The certificate for the secure metrics server.
	I1030 23:35:34.001703  232335 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1030 23:35:34.001709  232335 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1030 23:35:34.001716  232335 command_runner.go:130] > # certificate on any modification event.
	I1030 23:35:34.001720  232335 command_runner.go:130] > # metrics_cert = ""
	I1030 23:35:34.001727  232335 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1030 23:35:34.001732  232335 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1030 23:35:34.001739  232335 command_runner.go:130] > # metrics_key = ""
	I1030 23:35:34.001744  232335 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1030 23:35:34.001750  232335 command_runner.go:130] > [crio.tracing]
	I1030 23:35:34.001758  232335 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1030 23:35:34.001764  232335 command_runner.go:130] > # enable_tracing = false
	I1030 23:35:34.001772  232335 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1030 23:35:34.001779  232335 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1030 23:35:34.001784  232335 command_runner.go:130] > # Number of samples to collect per million spans.
	I1030 23:35:34.001792  232335 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1030 23:35:34.001798  232335 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1030 23:35:34.001804  232335 command_runner.go:130] > [crio.stats]
	I1030 23:35:34.001810  232335 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1030 23:35:34.001817  232335 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1030 23:35:34.001824  232335 command_runner.go:130] > # stats_collection_period = 0
	I1030 23:35:34.001865  232335 command_runner.go:130] ! time="2023-10-30 23:35:33.941514066Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1030 23:35:34.001877  232335 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
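	The dump above is the CRI-O configuration rendered by minikube on the node: commented lines are CRI-O's built-in defaults, while the uncommented keys (storage_driver, cgroup_manager, conmon, pause_image, pids_limit, ...) are the values this run overrides. A minimal sketch, assuming the multinode-370491 profile from this run and that the rendered file lives at /etc/crio/crio.conf in the guest, for reading it back for comparison:
	  minikube -p multinode-370491 ssh -- sudo cat /etc/crio/crio.conf
	  minikube -p multinode-370491 ssh -- sudo crictl info   # the runtime's own view of the active config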
	I1030 23:35:34.001953  232335 cni.go:84] Creating CNI manager for ""
	I1030 23:35:34.001963  232335 cni.go:136] 3 nodes found, recommending kindnet
	I1030 23:35:34.001983  232335 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1030 23:35:34.002006  232335 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.231 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-370491 NodeName:multinode-370491 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 23:35:34.002163  232335 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-370491"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 23:35:34.002243  232335 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-370491 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-370491 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
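	The kubeadm configuration above is written to the node as /var/tmp/minikube/kubeadm.yaml.new and the kubelet unit drop-in as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp steps below). A minimal sketch, not performed in this run, for dry-running that config on the node with the same kubeadm binary found under /var/lib/minikube/binaries/v1.28.3:
	  sudo /var/lib/minikube/binaries/v1.28.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run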
	I1030 23:35:34.002296  232335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1030 23:35:34.011664  232335 command_runner.go:130] > kubeadm
	I1030 23:35:34.011683  232335 command_runner.go:130] > kubectl
	I1030 23:35:34.011689  232335 command_runner.go:130] > kubelet
	I1030 23:35:34.011719  232335 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 23:35:34.011769  232335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1030 23:35:34.020000  232335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1030 23:35:34.035422  232335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 23:35:34.052728  232335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1030 23:35:34.070555  232335 ssh_runner.go:195] Run: grep 192.168.39.231	control-plane.minikube.internal$ /etc/hosts
	I1030 23:35:34.074256  232335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.231	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1030 23:35:34.087966  232335 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491 for IP: 192.168.39.231
	I1030 23:35:34.088039  232335 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:35:34.088213  232335 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1030 23:35:34.088269  232335 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1030 23:35:34.088354  232335 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.key
	I1030 23:35:34.088431  232335 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/apiserver.key.cabadef2
	I1030 23:35:34.088523  232335 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/proxy-client.key
	I1030 23:35:34.088541  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1030 23:35:34.088565  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1030 23:35:34.088585  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1030 23:35:34.088610  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1030 23:35:34.088628  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 23:35:34.088648  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 23:35:34.088670  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 23:35:34.088688  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 23:35:34.088758  232335 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1030 23:35:34.088802  232335 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1030 23:35:34.088821  232335 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 23:35:34.088857  232335 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1030 23:35:34.088900  232335 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1030 23:35:34.088957  232335 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1030 23:35:34.089016  232335 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1030 23:35:34.089057  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem -> /usr/share/ca-certificates/216005.pem
	I1030 23:35:34.089078  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> /usr/share/ca-certificates/2160052.pem
	I1030 23:35:34.089097  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:35:34.090085  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1030 23:35:34.116044  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1030 23:35:34.140799  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1030 23:35:34.165294  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1030 23:35:34.191096  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 23:35:34.215770  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 23:35:34.239169  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 23:35:34.263279  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1030 23:35:34.286946  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1030 23:35:34.310025  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1030 23:35:34.332887  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 23:35:34.357527  232335 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1030 23:35:34.374114  232335 ssh_runner.go:195] Run: openssl version
	I1030 23:35:34.380138  232335 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1030 23:35:34.380267  232335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1030 23:35:34.391219  232335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1030 23:35:34.396583  232335 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1030 23:35:34.396618  232335 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1030 23:35:34.396666  232335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1030 23:35:34.402056  232335 command_runner.go:130] > 51391683
	I1030 23:35:34.402149  232335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1030 23:35:34.411814  232335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1030 23:35:34.421540  232335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1030 23:35:34.426015  232335 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1030 23:35:34.426058  232335 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1030 23:35:34.426098  232335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1030 23:35:34.431501  232335 command_runner.go:130] > 3ec20f2e
	I1030 23:35:34.431639  232335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 23:35:34.441752  232335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 23:35:34.451867  232335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:35:34.456522  232335 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:35:34.456557  232335 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:35:34.456613  232335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:35:34.461861  232335 command_runner.go:130] > b5213941
	I1030 23:35:34.462143  232335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
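
The hash-and-symlink step above is how OpenSSL-style trust stores locate CA certificates: the subject hash printed by `openssl x509 -hash -noout` becomes the `<hash>.0` link name under /etc/ssl/certs. A minimal Go sketch of that pattern, assuming a local openssl binary and placeholder paths (a sketch only, not the minikube implementation, which runs these commands over SSH):

// Sketch: compute a certificate's OpenSSL subject hash and create the
// <hash>.0 symlink that the log's `ln -fs` commands produce.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath, certsDir string) error {
	// `openssl x509 -hash -noout -in <cert>` prints the subject hash, e.g. 51391683.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))

	// OpenSSL looks up CAs as <subject-hash>.0 inside the certs directory.
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic `ln -fs`: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
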
	I1030 23:35:34.471688  232335 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1030 23:35:34.476082  232335 command_runner.go:130] > ca.crt
	I1030 23:35:34.476101  232335 command_runner.go:130] > ca.key
	I1030 23:35:34.476109  232335 command_runner.go:130] > healthcheck-client.crt
	I1030 23:35:34.476116  232335 command_runner.go:130] > healthcheck-client.key
	I1030 23:35:34.476131  232335 command_runner.go:130] > peer.crt
	I1030 23:35:34.476143  232335 command_runner.go:130] > peer.key
	I1030 23:35:34.476149  232335 command_runner.go:130] > server.crt
	I1030 23:35:34.476158  232335 command_runner.go:130] > server.key
	I1030 23:35:34.476224  232335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1030 23:35:34.481863  232335 command_runner.go:130] > Certificate will not expire
	I1030 23:35:34.481941  232335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1030 23:35:34.487166  232335 command_runner.go:130] > Certificate will not expire
	I1030 23:35:34.487481  232335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1030 23:35:34.492823  232335 command_runner.go:130] > Certificate will not expire
	I1030 23:35:34.493143  232335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1030 23:35:34.498424  232335 command_runner.go:130] > Certificate will not expire
	I1030 23:35:34.498812  232335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1030 23:35:34.503949  232335 command_runner.go:130] > Certificate will not expire
	I1030 23:35:34.504064  232335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1030 23:35:34.509730  232335 command_runner.go:130] > Certificate will not expire
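
The `-checkend 86400` runs above ask whether each certificate expires within the next 24 hours; "Certificate will not expire" is openssl's success message for that check. A minimal equivalent in Go using crypto/x509, with a placeholder path (a sketch, not the actual check the test harness performs over SSH):

// Sketch: report whether a PEM certificate expires within the given window,
// mirroring `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if "now + window" falls past NotAfter, i.e. the cert expires in time.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
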
	I1030 23:35:34.509793  232335 kubeadm.go:404] StartCluster: {Name:multinode-370491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.3 ClusterName:multinode-370491 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.85 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.108 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1030 23:35:34.509907  232335 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1030 23:35:34.509955  232335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 23:35:34.549175  232335 cri.go:89] found id: ""
	I1030 23:35:34.549255  232335 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1030 23:35:34.558488  232335 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1030 23:35:34.558516  232335 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1030 23:35:34.558528  232335 command_runner.go:130] > /var/lib/minikube/etcd:
	I1030 23:35:34.558558  232335 command_runner.go:130] > member
	I1030 23:35:34.558583  232335 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1030 23:35:34.558599  232335 kubeadm.go:636] restartCluster start
	I1030 23:35:34.558670  232335 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1030 23:35:34.567170  232335 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:34.567805  232335 kubeconfig.go:92] found "multinode-370491" server: "https://192.168.39.231:8443"
	I1030 23:35:34.568277  232335 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:35:34.568643  232335 kapi.go:59] client config for multinode-370491: &rest.Config{Host:"https://192.168.39.231:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.crt", KeyFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.key", CAFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 23:35:34.569329  232335 cert_rotation.go:137] Starting client certificate rotation controller
	I1030 23:35:34.569643  232335 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1030 23:35:34.578481  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:34.578603  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:34.590232  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:34.590254  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:34.590312  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:34.600848  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:35.101591  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:35.101690  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:35.113959  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:35.601882  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:35.601986  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:35.613077  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:36.101736  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:36.101833  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:36.112790  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:36.601307  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:36.601463  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:36.613302  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:37.102009  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:37.102114  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:37.113268  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:37.601990  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:37.602070  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:37.612811  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:38.101355  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:38.101462  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:38.112398  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:38.601566  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:38.601671  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:38.613440  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:39.100994  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:39.101092  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:39.112139  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:39.601792  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:39.601882  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:39.613724  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:40.101030  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:40.101113  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:40.111984  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:40.601003  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:40.601098  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:40.612250  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:41.101821  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:41.101921  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:41.112746  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:41.601287  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:41.601369  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:41.612323  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:42.101945  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:42.102075  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:42.112847  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:42.601368  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:42.601468  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:42.612245  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:43.101852  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:43.102025  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:43.113341  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:43.601503  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:43.601590  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:43.612424  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:44.101004  232335 api_server.go:166] Checking apiserver status ...
	I1030 23:35:44.101106  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1030 23:35:44.112065  232335 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1030 23:35:44.578774  232335 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
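
The loop above retries `pgrep` for the apiserver roughly every 500ms until the surrounding context deadline expires, which is what yields the "needs reconfigure: apiserver error: context deadline exceeded" conclusion. A hedged Go sketch of that polling pattern, with placeholder timings and a local pgrep (not minikube code):

// Sketch: poll for a kube-apiserver process until found or until the
// context deadline is exceeded, as the log's repeated pgrep attempts do.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForAPIServerPID(ctx context.Context, interval time.Duration) (string, error) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil // found a PID
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("apiserver error: %w", ctx.Err()) // e.g. context deadline exceeded
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx, 500*time.Millisecond)
	if err != nil {
		fmt.Println("needs reconfigure:", err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}
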
	I1030 23:35:44.578817  232335 kubeadm.go:1128] stopping kube-system containers ...
	I1030 23:35:44.578831  232335 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1030 23:35:44.578920  232335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1030 23:35:44.617217  232335 cri.go:89] found id: ""
	I1030 23:35:44.617284  232335 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1030 23:35:44.631695  232335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1030 23:35:44.640179  232335 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1030 23:35:44.640198  232335 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1030 23:35:44.640206  232335 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1030 23:35:44.640212  232335 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 23:35:44.640237  232335 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1030 23:35:44.640275  232335 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1030 23:35:44.648452  232335 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1030 23:35:44.648482  232335 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 23:35:44.761891  232335 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1030 23:35:44.761925  232335 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1030 23:35:44.761932  232335 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1030 23:35:44.761938  232335 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1030 23:35:44.761955  232335 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1030 23:35:44.761961  232335 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1030 23:35:44.761966  232335 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1030 23:35:44.761973  232335 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1030 23:35:44.761984  232335 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1030 23:35:44.761993  232335 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1030 23:35:44.761999  232335 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1030 23:35:44.762006  232335 command_runner.go:130] > [certs] Using the existing "sa" key
	I1030 23:35:44.762029  232335 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 23:35:44.811298  232335 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1030 23:35:44.932992  232335 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1030 23:35:45.191642  232335 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1030 23:35:45.380121  232335 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1030 23:35:45.570495  232335 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1030 23:35:45.573400  232335 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1030 23:35:45.768853  232335 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 23:35:45.768888  232335 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 23:35:45.768898  232335 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1030 23:35:45.768929  232335 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 23:35:45.839039  232335 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1030 23:35:45.839071  232335 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1030 23:35:45.840574  232335 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1030 23:35:45.841561  232335 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1030 23:35:45.845314  232335 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1030 23:35:45.915464  232335 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1030 23:35:45.915502  232335 api_server.go:52] waiting for apiserver process to appear ...
	I1030 23:35:45.915555  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 23:35:45.931148  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 23:35:46.445071  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 23:35:46.944862  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 23:35:47.445382  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 23:35:47.944596  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 23:35:48.444841  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 23:35:48.465771  232335 command_runner.go:130] > 1094
	I1030 23:35:48.466213  232335 api_server.go:72] duration metric: took 2.550703993s to wait for apiserver process to appear ...
	I1030 23:35:48.466237  232335 api_server.go:88] waiting for apiserver healthz status ...
	I1030 23:35:48.466257  232335 api_server.go:253] Checking apiserver healthz at https://192.168.39.231:8443/healthz ...
	I1030 23:35:52.129401  232335 api_server.go:279] https://192.168.39.231:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1030 23:35:52.129435  232335 api_server.go:103] status: https://192.168.39.231:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1030 23:35:52.129446  232335 api_server.go:253] Checking apiserver healthz at https://192.168.39.231:8443/healthz ...
	I1030 23:35:52.222271  232335 api_server.go:279] https://192.168.39.231:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1030 23:35:52.222347  232335 api_server.go:103] status: https://192.168.39.231:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1030 23:35:52.723052  232335 api_server.go:253] Checking apiserver healthz at https://192.168.39.231:8443/healthz ...
	I1030 23:35:52.728078  232335 api_server.go:279] https://192.168.39.231:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1030 23:35:52.728105  232335 api_server.go:103] status: https://192.168.39.231:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1030 23:35:53.222678  232335 api_server.go:253] Checking apiserver healthz at https://192.168.39.231:8443/healthz ...
	I1030 23:35:53.232782  232335 api_server.go:279] https://192.168.39.231:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1030 23:35:53.232820  232335 api_server.go:103] status: https://192.168.39.231:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1030 23:35:53.722913  232335 api_server.go:253] Checking apiserver healthz at https://192.168.39.231:8443/healthz ...
	I1030 23:35:53.728119  232335 api_server.go:279] https://192.168.39.231:8443/healthz returned 200:
	ok
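
The readiness probe above polls /healthz until the 403 (anonymous user) and 500 (poststarthook report) responses give way to a plain 200 "ok". A simplified Go sketch of that polling loop; it skips TLS verification purely for brevity, whereas the real client presents the cluster CA and client certificates, and the URL and timings are placeholders:

// Sketch: poll an apiserver /healthz endpoint until it returns HTTP 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
			// 403/500 bodies (like the poststarthook reports above) mean "not ready yet".
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz never became ready within %s", timeout)
}

func main() {
	if err := waitHealthy("https://192.168.39.231:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
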
	I1030 23:35:53.728198  232335 round_trippers.go:463] GET https://192.168.39.231:8443/version
	I1030 23:35:53.728208  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:53.728219  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:53.728231  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:53.737303  232335 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1030 23:35:53.737330  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:53.737342  232335 round_trippers.go:580]     Audit-Id: b08bd8f5-e415-4991-9655-45ccf3e95268
	I1030 23:35:53.737353  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:53.737363  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:53.737374  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:53.737414  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:53.737437  232335 round_trippers.go:580]     Content-Length: 264
	I1030 23:35:53.737450  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:53 GMT
	I1030 23:35:53.737505  232335 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1030 23:35:53.737626  232335 api_server.go:141] control plane version: v1.28.3
	I1030 23:35:53.737648  232335 api_server.go:131] duration metric: took 5.271403201s to wait for apiserver health ...
	I1030 23:35:53.737661  232335 cni.go:84] Creating CNI manager for ""
	I1030 23:35:53.737672  232335 cni.go:136] 3 nodes found, recommending kindnet
	I1030 23:35:53.739604  232335 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1030 23:35:53.741182  232335 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1030 23:35:53.750357  232335 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1030 23:35:53.750381  232335 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1030 23:35:53.750399  232335 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1030 23:35:53.750411  232335 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1030 23:35:53.750420  232335 command_runner.go:130] > Access: 2023-10-30 23:35:21.496527687 +0000
	I1030 23:35:53.750433  232335 command_runner.go:130] > Modify: 2023-10-30 22:33:43.000000000 +0000
	I1030 23:35:53.750447  232335 command_runner.go:130] > Change: 2023-10-30 23:35:19.562527687 +0000
	I1030 23:35:53.750457  232335 command_runner.go:130] >  Birth: -
	I1030 23:35:53.752377  232335 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1030 23:35:53.752391  232335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1030 23:35:53.784251  232335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1030 23:35:54.913488  232335 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1030 23:35:54.917707  232335 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1030 23:35:54.923331  232335 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1030 23:35:54.947361  232335 command_runner.go:130] > daemonset.apps/kindnet configured
	I1030 23:35:54.952586  232335 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.1682973s)
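
Applying the CNI manifest is done by shelling out to the bundled kubectl with an explicit kubeconfig, as the Run line above shows. A minimal Go sketch of that invocation, reusing the paths from the log purely as placeholders:

// Sketch: apply a manifest by invoking kubectl with a fixed kubeconfig,
// streaming its output (e.g. "daemonset.apps/kindnet configured").
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyManifest(kubectl, kubeconfig, manifest string) error {
	cmd := exec.Command(kubectl, "apply", "--kubeconfig="+kubeconfig, "-f", manifest)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	err := applyManifest(
		"/var/lib/minikube/binaries/v1.28.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/var/tmp/minikube/cni.yaml",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
		os.Exit(1)
	}
}
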
	I1030 23:35:54.952621  232335 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 23:35:54.952751  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I1030 23:35:54.952767  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:54.952779  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:54.952792  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:54.964742  232335 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1030 23:35:54.964766  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:54.964777  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:54 GMT
	I1030 23:35:54.964803  232335 round_trippers.go:580]     Audit-Id: a0859b0d-5383-49ea-8048-8af55f1d66d5
	I1030 23:35:54.964817  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:54.964827  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:54.964836  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:54.964847  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:54.966888  232335 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"758"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"755","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83136 chars]
	I1030 23:35:54.972144  232335 system_pods.go:59] 12 kube-system pods found
	I1030 23:35:54.972174  232335 system_pods.go:61] "coredns-5dd5756b68-6pgvt" [d854be1d-ae4e-420a-9853-253f0258915c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1030 23:35:54.972181  232335 system_pods.go:61] "etcd-multinode-370491" [eb24307f-f00b-4406-bb05-b18eafd0eca1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1030 23:35:54.972187  232335 system_pods.go:61] "kindnet-76g2q" [6f0bf1cd-7456-4578-acf0-6aa80be9db33] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1030 23:35:54.972192  232335 system_pods.go:61] "kindnet-m45c4" [6e2a0237-6787-4bba-b723-93eaf5ac3005] Running
	I1030 23:35:54.972197  232335 system_pods.go:61] "kindnet-m9f5k" [a79ceb52-48df-4240-9edc-05c81bf58f73] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1030 23:35:54.972205  232335 system_pods.go:61] "kube-apiserver-multinode-370491" [d1874c7c-46ee-42eb-a395-c0d0138b3422] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1030 23:35:54.972214  232335 system_pods.go:61] "kube-controller-manager-multinode-370491" [4da6c57f-cec4-498b-a390-3fa2f8619a0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1030 23:35:54.972223  232335 system_pods.go:61] "kube-proxy-g9wzd" [9bffc44c-9d7f-4d1c-82e7-f249c53bf452] Running
	I1030 23:35:54.972227  232335 system_pods.go:61] "kube-proxy-tv2b7" [d68314ab-5356-4cd6-a611-f3efd8b2d4e0] Running
	I1030 23:35:54.972231  232335 system_pods.go:61] "kube-proxy-xbsl5" [eb41a78a-bf80-4546-b7d6-423a8c3ad0e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1030 23:35:54.972240  232335 system_pods.go:61] "kube-scheduler-multinode-370491" [b71476bb-1843-4ff9-8639-40ae73b72c8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1030 23:35:54.972247  232335 system_pods.go:61] "storage-provisioner" [6f2bbacd-e138-4f82-961e-76f1daf88ccd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1030 23:35:54.972255  232335 system_pods.go:74] duration metric: took 19.628022ms to wait for pod list to return data ...
	I1030 23:35:54.972267  232335 node_conditions.go:102] verifying NodePressure condition ...
	I1030 23:35:54.972321  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes
	I1030 23:35:54.972328  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:54.972335  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:54.972341  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:54.976495  232335 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 23:35:54.976515  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:54.976525  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:54.976534  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:54.976542  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:54.976551  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:54.976560  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:54 GMT
	I1030 23:35:54.976581  232335 round_trippers.go:580]     Audit-Id: d04cc41a-ba54-4bdd-8bf2-abae44b8876b
	I1030 23:35:54.976842  232335 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"758"},"items":[{"metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"711","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manage
dFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1"," [truncated 15258 chars]
	I1030 23:35:54.977734  232335 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1030 23:35:54.977758  232335 node_conditions.go:123] node cpu capacity is 2
	I1030 23:35:54.977812  232335 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1030 23:35:54.977821  232335 node_conditions.go:123] node cpu capacity is 2
	I1030 23:35:54.977827  232335 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1030 23:35:54.977831  232335 node_conditions.go:123] node cpu capacity is 2
	I1030 23:35:54.977838  232335 node_conditions.go:105] duration metric: took 5.567182ms to run NodePressure ...
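
The NodePressure check reads each node's reported CPU and ephemeral-storage capacity, which is what the three pairs of capacity lines above show for the three nodes. A small client-go sketch that retrieves the same fields; the kubeconfig path is a placeholder, not what the test harness uses:

// Sketch: list nodes and print CPU / ephemeral-storage capacity with client-go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu capacity %s, ephemeral storage %s\n", n.Name, cpu.String(), storage.String())
	}
}
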
	I1030 23:35:54.977855  232335 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1030 23:35:55.177903  232335 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1030 23:35:55.237357  232335 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1030 23:35:55.238991  232335 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1030 23:35:55.239145  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1030 23:35:55.239160  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:55.239172  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:55.239181  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:55.244183  232335 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 23:35:55.244202  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:55.244208  232335 round_trippers.go:580]     Audit-Id: 8d441910-33bd-4c1c-b430-7310216436b2
	I1030 23:35:55.244219  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:55.244230  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:55.244249  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:55.244259  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:55.244264  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:55 GMT
	I1030 23:35:55.245473  232335 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"762"},"items":[{"metadata":{"name":"etcd-multinode-370491","namespace":"kube-system","uid":"eb24307f-f00b-4406-bb05-b18eafd0eca1","resourceVersion":"754","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.231:2379","kubernetes.io/config.hash":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.mirror":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.seen":"2023-10-30T23:25:35.493661052Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I1030 23:35:55.246746  232335 kubeadm.go:787] kubelet initialised
	I1030 23:35:55.246763  232335 kubeadm.go:788] duration metric: took 7.750882ms waiting for restarted kubelet to initialise ...
	I1030 23:35:55.246771  232335 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 23:35:55.246828  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I1030 23:35:55.246836  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:55.246844  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:55.246852  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:55.252467  232335 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1030 23:35:55.252485  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:55.252494  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:55 GMT
	I1030 23:35:55.252502  232335 round_trippers.go:580]     Audit-Id: c256d4e2-71a3-48d9-8973-3965d75ad44d
	I1030 23:35:55.252509  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:55.252518  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:55.252526  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:55.252537  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:55.253770  232335 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"762"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"755","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82635 chars]
	I1030 23:35:55.256164  232335 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6pgvt" in "kube-system" namespace to be "Ready" ...
	I1030 23:35:55.256239  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6pgvt
	I1030 23:35:55.256247  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:55.256254  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:55.256260  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:55.258229  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:35:55.258241  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:55.258247  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:55 GMT
	I1030 23:35:55.258252  232335 round_trippers.go:580]     Audit-Id: 53291b90-47d0-4ea7-96a9-9e96247f386a
	I1030 23:35:55.258257  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:55.258262  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:55.258266  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:55.258271  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:55.258532  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"755","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1030 23:35:55.258935  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:55.258947  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:55.258954  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:55.258962  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:55.260625  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:35:55.260640  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:55.260646  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:55.260651  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:55.260656  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:55 GMT
	I1030 23:35:55.260661  232335 round_trippers.go:580]     Audit-Id: 5b881ebe-d797-425c-84c8-c8e0cb35a3e3
	I1030 23:35:55.260666  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:55.260673  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:55.261058  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"711","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6125 chars]
	I1030 23:35:55.261475  232335 pod_ready.go:97] node "multinode-370491" hosting pod "coredns-5dd5756b68-6pgvt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-370491" has status "Ready":"False"
	I1030 23:35:55.261496  232335 pod_ready.go:81] duration metric: took 5.313121ms waiting for pod "coredns-5dd5756b68-6pgvt" in "kube-system" namespace to be "Ready" ...
	E1030 23:35:55.261504  232335 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-370491" hosting pod "coredns-5dd5756b68-6pgvt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-370491" has status "Ready":"False"
	I1030 23:35:55.261520  232335 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:35:55.261566  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-370491
	I1030 23:35:55.261575  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:55.261585  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:55.261597  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:55.263312  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:35:55.263334  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:55.263344  232335 round_trippers.go:580]     Audit-Id: 6a1c6ab4-77b4-40d8-b581-43a8646de573
	I1030 23:35:55.263354  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:55.263366  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:55.263379  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:55.263394  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:55.263407  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:55 GMT
	I1030 23:35:55.263650  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-370491","namespace":"kube-system","uid":"eb24307f-f00b-4406-bb05-b18eafd0eca1","resourceVersion":"754","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.231:2379","kubernetes.io/config.hash":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.mirror":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.seen":"2023-10-30T23:25:35.493661052Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1030 23:35:55.264090  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:55.264107  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:55.264117  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:55.264129  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:55.265977  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:35:55.265994  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:55.266002  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:55.266015  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:55.266028  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:55 GMT
	I1030 23:35:55.266041  232335 round_trippers.go:580]     Audit-Id: b5cf7cb1-e27a-4317-a34a-efdd215fce99
	I1030 23:35:55.266051  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:55.266058  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:55.266185  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"711","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6125 chars]
	I1030 23:35:55.266588  232335 pod_ready.go:97] node "multinode-370491" hosting pod "etcd-multinode-370491" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-370491" has status "Ready":"False"
	I1030 23:35:55.266610  232335 pod_ready.go:81] duration metric: took 5.083673ms waiting for pod "etcd-multinode-370491" in "kube-system" namespace to be "Ready" ...
	E1030 23:35:55.266627  232335 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-370491" hosting pod "etcd-multinode-370491" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-370491" has status "Ready":"False"
	I1030 23:35:55.266649  232335 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:35:55.266721  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-370491
	I1030 23:35:55.266733  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:55.266744  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:55.266759  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:55.268393  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:35:55.268409  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:55.268415  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:55.268420  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:55.268425  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:55.268430  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:55.268435  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:55 GMT
	I1030 23:35:55.268440  232335 round_trippers.go:580]     Audit-Id: b4fe72bd-b8a6-4a05-9503-d83462a80817
	I1030 23:35:55.268651  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-370491","namespace":"kube-system","uid":"d1874c7c-46ee-42eb-a395-c0d0138b3422","resourceVersion":"748","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.231:8443","kubernetes.io/config.hash":"377aac2edfa5973c73516a60b3dd1cd5","kubernetes.io/config.mirror":"377aac2edfa5973c73516a60b3dd1cd5","kubernetes.io/config.seen":"2023-10-30T23:25:35.493664410Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1030 23:35:55.268993  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:55.269003  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:55.269010  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:55.269016  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:55.270846  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:35:55.272745  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:55.272759  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:55.272768  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:55 GMT
	I1030 23:35:55.272776  232335 round_trippers.go:580]     Audit-Id: 994bc679-7063-4176-ae59-369083dfefdb
	I1030 23:35:55.272785  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:55.272807  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:55.272817  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:55.272945  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"711","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6125 chars]
	I1030 23:35:55.273214  232335 pod_ready.go:97] node "multinode-370491" hosting pod "kube-apiserver-multinode-370491" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-370491" has status "Ready":"False"
	I1030 23:35:55.273283  232335 pod_ready.go:81] duration metric: took 6.568584ms waiting for pod "kube-apiserver-multinode-370491" in "kube-system" namespace to be "Ready" ...
	E1030 23:35:55.273299  232335 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-370491" hosting pod "kube-apiserver-multinode-370491" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-370491" has status "Ready":"False"
	I1030 23:35:55.273309  232335 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:35:55.273378  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-370491
	I1030 23:35:55.273389  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:55.273399  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:55.273409  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:55.275138  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:35:55.275151  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:55.275157  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:55.275162  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:55.275166  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:55.275171  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:55 GMT
	I1030 23:35:55.275176  232335 round_trippers.go:580]     Audit-Id: 257801a6-70aa-4db3-9ff7-baa9d5a8050f
	I1030 23:35:55.275196  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:55.275416  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-370491","namespace":"kube-system","uid":"4da6c57f-cec4-498b-a390-3fa2f8619a0b","resourceVersion":"749","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"55259bd1b9f1e240aa9139582b4696e7","kubernetes.io/config.mirror":"55259bd1b9f1e240aa9139582b4696e7","kubernetes.io/config.seen":"2023-10-30T23:25:35.493665415Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I1030 23:35:55.353012  232335 request.go:629] Waited for 77.247188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:55.353110  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:55.353120  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:55.353132  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:55.353148  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:55.356048  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:35:55.356071  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:55.356080  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:55.356087  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:55.356095  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:55 GMT
	I1030 23:35:55.356102  232335 round_trippers.go:580]     Audit-Id: 60dbacbe-486d-47e8-9a0b-f4945db8f078
	I1030 23:35:55.356109  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:55.356117  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:55.356242  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"711","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6125 chars]
	I1030 23:35:55.356601  232335 pod_ready.go:97] node "multinode-370491" hosting pod "kube-controller-manager-multinode-370491" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-370491" has status "Ready":"False"
	I1030 23:35:55.356631  232335 pod_ready.go:81] duration metric: took 83.310849ms waiting for pod "kube-controller-manager-multinode-370491" in "kube-system" namespace to be "Ready" ...
	E1030 23:35:55.356651  232335 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-370491" hosting pod "kube-controller-manager-multinode-370491" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-370491" has status "Ready":"False"
	I1030 23:35:55.356666  232335 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g9wzd" in "kube-system" namespace to be "Ready" ...
	I1030 23:35:55.553482  232335 request.go:629] Waited for 196.724367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wzd
	I1030 23:35:55.553558  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wzd
	I1030 23:35:55.553563  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:55.553572  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:55.553578  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:55.558329  232335 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 23:35:55.558359  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:55.558370  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:55.558379  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:55.558387  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:55 GMT
	I1030 23:35:55.558394  232335 round_trippers.go:580]     Audit-Id: 16b6b561-93f6-4b5f-8503-72be47d0435f
	I1030 23:35:55.558402  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:55.558411  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:55.558600  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g9wzd","generateName":"kube-proxy-","namespace":"kube-system","uid":"9bffc44c-9d7f-4d1c-82e7-f249c53bf452","resourceVersion":"485","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8ea24659-b585-4c83-ad95-b587ea718f59","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ea24659-b585-4c83-ad95-b587ea718f59\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5521 chars]
	I1030 23:35:55.753598  232335 request.go:629] Waited for 194.413691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:35:55.753694  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:35:55.753712  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:55.753721  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:55.753727  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:55.756498  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:35:55.756519  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:55.756539  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:55.756547  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:55 GMT
	I1030 23:35:55.756555  232335 round_trippers.go:580]     Audit-Id: d1755bfe-a774-4ef5-b5ff-4d36db62952a
	I1030 23:35:55.756563  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:55.756576  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:55.756585  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:55.757069  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e","resourceVersion":"713","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3683 chars]
	I1030 23:35:55.757361  232335 pod_ready.go:92] pod "kube-proxy-g9wzd" in "kube-system" namespace has status "Ready":"True"
	I1030 23:35:55.757380  232335 pod_ready.go:81] duration metric: took 400.706043ms waiting for pod "kube-proxy-g9wzd" in "kube-system" namespace to be "Ready" ...
	I1030 23:35:55.757394  232335 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tv2b7" in "kube-system" namespace to be "Ready" ...
	I1030 23:35:55.952780  232335 request.go:629] Waited for 195.317465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tv2b7
	I1030 23:35:55.952891  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tv2b7
	I1030 23:35:55.952907  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:55.952957  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:55.952970  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:55.955958  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:35:55.955984  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:55.955997  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:55.956007  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:55.956014  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:55.956022  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:55 GMT
	I1030 23:35:55.956030  232335 round_trippers.go:580]     Audit-Id: 40d361fc-cb81-40ea-aa20-8ab4292135dc
	I1030 23:35:55.956039  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:55.956231  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tv2b7","generateName":"kube-proxy-","namespace":"kube-system","uid":"d68314ab-5356-4cd6-a611-f3efd8b2d4e0","resourceVersion":"685","creationTimestamp":"2023-10-30T23:27:17Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8ea24659-b585-4c83-ad95-b587ea718f59","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ea24659-b585-4c83-ad95-b587ea718f59\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5525 chars]
	I1030 23:35:56.153186  232335 request.go:629] Waited for 196.371921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m03
	I1030 23:35:56.153270  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m03
	I1030 23:35:56.153280  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:56.153292  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:56.153305  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:56.155839  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:35:56.155866  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:56.155877  232335 round_trippers.go:580]     Audit-Id: 18937397-b391-401f-9fa1-1b721409bf49
	I1030 23:35:56.155886  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:56.155907  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:56.155916  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:56.155924  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:56.155935  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:56 GMT
	I1030 23:35:56.156223  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m03","uid":"5868a069-28a9-411e-b010-48ecb6a9e16b","resourceVersion":"705","creationTimestamp":"2023-10-30T23:27:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:27:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I1030 23:35:56.156518  232335 pod_ready.go:92] pod "kube-proxy-tv2b7" in "kube-system" namespace has status "Ready":"True"
	I1030 23:35:56.156534  232335 pod_ready.go:81] duration metric: took 399.133155ms waiting for pod "kube-proxy-tv2b7" in "kube-system" namespace to be "Ready" ...
	I1030 23:35:56.156545  232335 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xbsl5" in "kube-system" namespace to be "Ready" ...
	I1030 23:35:56.352927  232335 request.go:629] Waited for 196.297809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xbsl5
	I1030 23:35:56.353005  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xbsl5
	I1030 23:35:56.353010  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:56.353018  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:56.353024  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:56.356231  232335 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:35:56.356256  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:56.356266  232335 round_trippers.go:580]     Audit-Id: 3c5d7490-e508-4666-9c99-cdb99f5f7114
	I1030 23:35:56.356274  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:56.356281  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:56.356292  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:56.356300  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:56.356307  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:56 GMT
	I1030 23:35:56.356497  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xbsl5","generateName":"kube-proxy-","namespace":"kube-system","uid":"eb41a78a-bf80-4546-b7d6-423a8c3ad0e1","resourceVersion":"760","creationTimestamp":"2023-10-30T23:25:47Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8ea24659-b585-4c83-ad95-b587ea718f59","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ea24659-b585-4c83-ad95-b587ea718f59\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1030 23:35:56.553422  232335 request.go:629] Waited for 196.346461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:56.553533  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:56.553548  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:56.553557  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:56.553563  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:56.555960  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:35:56.555973  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:56.555979  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:56.555985  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:56.555990  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:56.555997  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:56.556006  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:56 GMT
	I1030 23:35:56.556014  232335 round_trippers.go:580]     Audit-Id: 68f03ee6-9aac-4d3d-93ef-d4a3cadc3ab1
	I1030 23:35:56.556584  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"711","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6125 chars]
	I1030 23:35:56.556969  232335 pod_ready.go:97] node "multinode-370491" hosting pod "kube-proxy-xbsl5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-370491" has status "Ready":"False"
	I1030 23:35:56.556996  232335 pod_ready.go:81] duration metric: took 400.439913ms waiting for pod "kube-proxy-xbsl5" in "kube-system" namespace to be "Ready" ...
	E1030 23:35:56.557006  232335 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-370491" hosting pod "kube-proxy-xbsl5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-370491" has status "Ready":"False"
	I1030 23:35:56.557015  232335 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:35:56.753489  232335 request.go:629] Waited for 196.393489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-370491
	I1030 23:35:56.753572  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-370491
	I1030 23:35:56.753582  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:56.753592  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:56.753602  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:56.756457  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:35:56.756473  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:56.756479  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:56.756485  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:56.756493  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:56.756500  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:56.756508  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:56 GMT
	I1030 23:35:56.756516  232335 round_trippers.go:580]     Audit-Id: 24b75aba-251a-4428-911f-74ae36a442b2
	I1030 23:35:56.756908  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-370491","namespace":"kube-system","uid":"b71476bb-1843-4ff9-8639-40ae73b72c8b","resourceVersion":"750","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dd3eb04179d9bdc0a8332c92e6e42d18","kubernetes.io/config.mirror":"dd3eb04179d9bdc0a8332c92e6e42d18","kubernetes.io/config.seen":"2023-10-30T23:25:35.493666103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I1030 23:35:56.953817  232335 request.go:629] Waited for 196.395515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:56.953934  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:56.953948  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:56.953960  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:56.953969  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:56.956611  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:35:56.956629  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:56.956636  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:56.956642  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:56.956647  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:56.956652  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:56 GMT
	I1030 23:35:56.956657  232335 round_trippers.go:580]     Audit-Id: b7837596-8bbf-4b1f-a907-01916998e8a9
	I1030 23:35:56.956665  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:56.956988  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"711","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6125 chars]
	I1030 23:35:56.957646  232335 pod_ready.go:97] node "multinode-370491" hosting pod "kube-scheduler-multinode-370491" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-370491" has status "Ready":"False"
	I1030 23:35:56.957684  232335 pod_ready.go:81] duration metric: took 400.660108ms waiting for pod "kube-scheduler-multinode-370491" in "kube-system" namespace to be "Ready" ...
	E1030 23:35:56.957698  232335 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-370491" hosting pod "kube-scheduler-multinode-370491" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-370491" has status "Ready":"False"
	I1030 23:35:56.957718  232335 pod_ready.go:38] duration metric: took 1.71093805s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
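The pod_ready lines above record minikube polling each system-critical pod (and its hosting node) until the Ready condition is reported, skipping pods whose node is still NotReady. A minimal sketch of that polling pattern with client-go is shown below; the pollPodReady helper, the 2-second interval, and the kubeconfig path are illustrative assumptions, not minikube's actual pod_ready.go implementation.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // pollPodReady waits up to timeout for the named pod to report the Ready
    // condition. Illustrative only: minikube additionally checks the hosting
    // node's Ready condition, which is why the log above skips pods on a
    // NotReady node instead of waiting on them.
    func pollPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := pollPodReady(cs, "kube-system", "etcd-multinode-370491", 4*time.Minute); err != nil {
            fmt.Println("pod never became Ready:", err)
        }
    }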
	I1030 23:35:56.957739  232335 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1030 23:35:56.968699  232335 command_runner.go:130] > -16
	I1030 23:35:56.969032  232335 ops.go:34] apiserver oom_adj: -16
	I1030 23:35:56.969051  232335 kubeadm.go:640] restartCluster took 22.410443281s
	I1030 23:35:56.969061  232335 kubeadm.go:406] StartCluster complete in 22.459274648s
	I1030 23:35:56.969082  232335 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:35:56.969172  232335 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:35:56.970287  232335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:35:56.970613  232335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1030 23:35:56.970809  232335 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1030 23:35:56.970949  232335 config.go:182] Loaded profile config "multinode-370491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:35:56.973926  232335 out.go:177] * Enabled addons: 
	I1030 23:35:56.971098  232335 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:35:56.975328  232335 addons.go:502] enable addons completed in 4.509369ms: enabled=[]
	I1030 23:35:56.975571  232335 kapi.go:59] client config for multinode-370491: &rest.Config{Host:"https://192.168.39.231:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.crt", KeyFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.key", CAFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
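The "Waited for ... due to client-side throttling, not priority and fairness" messages scattered through this log come from client-go's token-bucket limiter: with QPS and Burst left at 0 in the rest.Config dumped above, the client falls back to its documented defaults of roughly 5 requests/sec with a burst of 10, so back-to-back GETs get queued for a fraction of a second. A short sketch of raising those limits is below; the kubeconfig path and the chosen values are assumptions for illustration only.

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load a kubeconfig such as the one minikube writes for this profile
        // (path here is an assumption).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17527-208817/kubeconfig")
        if err != nil {
            panic(err)
        }
        // With QPS/Burst left at zero (as in the rest.Config above), client-go
        // defaults to ~5 QPS with a burst of 10 and queues extra requests,
        // producing the "client-side throttling" waits seen in this log.
        cfg.QPS = 50
        cfg.Burst = 100
        _ = kubernetes.NewForConfigOrDie(cfg)
    }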
	I1030 23:35:56.975911  232335 round_trippers.go:463] GET https://192.168.39.231:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1030 23:35:56.975924  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:56.975934  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:56.975942  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:56.978847  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:35:56.978867  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:56.978878  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:56.978887  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:56.978905  232335 round_trippers.go:580]     Content-Length: 291
	I1030 23:35:56.978918  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:56 GMT
	I1030 23:35:56.978938  232335 round_trippers.go:580]     Audit-Id: 6563f41b-46f2-437c-bb32-58bd26643d0c
	I1030 23:35:56.978950  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:56.978962  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:56.979000  232335 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"20d25ead-69ff-4f03-b32f-13c215a6d708","resourceVersion":"761","creationTimestamp":"2023-10-30T23:25:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1030 23:35:56.979173  232335 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-370491" context rescaled to 1 replicas
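The rescale recorded above goes through the coredns deployment's scale subresource; a hypothetical kubectl equivalent, shown purely for reference and not something the test itself runs, would be:

    kubectl --context multinode-370491 -n kube-system scale deployment coredns --replicas=1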
	I1030 23:35:56.979228  232335 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1030 23:35:56.981522  232335 out.go:177] * Verifying Kubernetes components...
	I1030 23:35:56.982758  232335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 23:35:57.077882  232335 command_runner.go:130] > apiVersion: v1
	I1030 23:35:57.077911  232335 command_runner.go:130] > data:
	I1030 23:35:57.077918  232335 command_runner.go:130] >   Corefile: |
	I1030 23:35:57.077934  232335 command_runner.go:130] >     .:53 {
	I1030 23:35:57.077943  232335 command_runner.go:130] >         log
	I1030 23:35:57.077949  232335 command_runner.go:130] >         errors
	I1030 23:35:57.077953  232335 command_runner.go:130] >         health {
	I1030 23:35:57.077961  232335 command_runner.go:130] >            lameduck 5s
	I1030 23:35:57.077967  232335 command_runner.go:130] >         }
	I1030 23:35:57.077976  232335 command_runner.go:130] >         ready
	I1030 23:35:57.077989  232335 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1030 23:35:57.077998  232335 command_runner.go:130] >            pods insecure
	I1030 23:35:57.078010  232335 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1030 23:35:57.078024  232335 command_runner.go:130] >            ttl 30
	I1030 23:35:57.078029  232335 command_runner.go:130] >         }
	I1030 23:35:57.078033  232335 command_runner.go:130] >         prometheus :9153
	I1030 23:35:57.078036  232335 command_runner.go:130] >         hosts {
	I1030 23:35:57.078042  232335 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1030 23:35:57.078049  232335 command_runner.go:130] >            fallthrough
	I1030 23:35:57.078053  232335 command_runner.go:130] >         }
	I1030 23:35:57.078057  232335 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1030 23:35:57.078070  232335 command_runner.go:130] >            max_concurrent 1000
	I1030 23:35:57.078082  232335 command_runner.go:130] >         }
	I1030 23:35:57.078090  232335 command_runner.go:130] >         cache 30
	I1030 23:35:57.078101  232335 command_runner.go:130] >         loop
	I1030 23:35:57.078110  232335 command_runner.go:130] >         reload
	I1030 23:35:57.078120  232335 command_runner.go:130] >         loadbalance
	I1030 23:35:57.078128  232335 command_runner.go:130] >     }
	I1030 23:35:57.078136  232335 command_runner.go:130] > kind: ConfigMap
	I1030 23:35:57.078146  232335 command_runner.go:130] > metadata:
	I1030 23:35:57.078152  232335 command_runner.go:130] >   creationTimestamp: "2023-10-30T23:25:35Z"
	I1030 23:35:57.078159  232335 command_runner.go:130] >   name: coredns
	I1030 23:35:57.078168  232335 command_runner.go:130] >   namespace: kube-system
	I1030 23:35:57.078181  232335 command_runner.go:130] >   resourceVersion: "364"
	I1030 23:35:57.078195  232335 command_runner.go:130] >   uid: d4073356-9e8a-4259-8732-9beb303b7aee
	I1030 23:35:57.080312  232335 node_ready.go:35] waiting up to 6m0s for node "multinode-370491" to be "Ready" ...
	I1030 23:35:57.080511  232335 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
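Stripped of the command_runner.go prefixes, the ConfigMap printed line-by-line above reassembles to the Corefile below; its hosts block already carries the 192.168.39.1 host.minikube.internal record, which is why the previous line notes the edit is being skipped.

    apiVersion: v1
    data:
      Corefile: |
        .:53 {
            log
            errors
            health {
               lameduck 5s
            }
            ready
            kubernetes cluster.local in-addr.arpa ip6.arpa {
               pods insecure
               fallthrough in-addr.arpa ip6.arpa
               ttl 30
            }
            prometheus :9153
            hosts {
               192.168.39.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf {
               max_concurrent 1000
            }
            cache 30
            loop
            reload
            loadbalance
        }
    kind: ConfigMap
    metadata:
      creationTimestamp: "2023-10-30T23:25:35Z"
      name: coredns
      namespace: kube-system
      resourceVersion: "364"
      uid: d4073356-9e8a-4259-8732-9beb303b7aee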
	I1030 23:35:57.153691  232335 request.go:629] Waited for 73.229893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:57.153784  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:57.153797  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:57.153815  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:57.153824  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:57.156267  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:35:57.156290  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:57.156301  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:57.156310  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:57 GMT
	I1030 23:35:57.156319  232335 round_trippers.go:580]     Audit-Id: 7549094d-16a2-4fa5-b895-508a41c6af6f
	I1030 23:35:57.156327  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:57.156334  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:57.156340  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:57.156692  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"711","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6125 chars]
	I1030 23:35:57.353468  232335 request.go:629] Waited for 196.367392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:57.353559  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:57.353571  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:57.353579  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:57.353586  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:57.356227  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:35:57.356256  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:57.356271  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:57.356279  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:57.356287  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:57.356296  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:57 GMT
	I1030 23:35:57.356309  232335 round_trippers.go:580]     Audit-Id: 8180669a-e142-40e5-8bed-2f0044743e76
	I1030 23:35:57.356319  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:57.356962  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"711","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6125 chars]
	I1030 23:35:57.858119  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:57.858146  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:57.858155  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:57.858161  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:57.860556  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:35:57.860576  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:57.860583  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:57.860588  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:57.860593  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:57.860598  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:57 GMT
	I1030 23:35:57.860603  232335 round_trippers.go:580]     Audit-Id: 5ecd8d1b-4893-4d4a-86fa-706f11dd0136
	I1030 23:35:57.860608  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:57.861055  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"711","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6125 chars]
	I1030 23:35:58.357579  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:58.357601  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:58.357610  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:58.357615  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:58.364693  232335 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1030 23:35:58.364720  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:58.364730  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:58.364735  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:58.364740  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:58 GMT
	I1030 23:35:58.364746  232335 round_trippers.go:580]     Audit-Id: 71b8bdb3-58fa-459b-bc95-4b92f007d209
	I1030 23:35:58.364751  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:58.364774  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:58.365415  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"711","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6125 chars]
	I1030 23:35:58.858488  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:58.858511  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:58.858520  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:58.858526  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:58.861125  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:35:58.861151  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:58.861160  232335 round_trippers.go:580]     Audit-Id: 5f0e010a-d2c4-4900-95c6-df7c5d586569
	I1030 23:35:58.861168  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:58.861179  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:58.861190  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:58.861201  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:58.861211  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:58 GMT
	I1030 23:35:58.861566  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:35:58.861921  232335 node_ready.go:49] node "multinode-370491" has status "Ready":"True"
	I1030 23:35:58.861949  232335 node_ready.go:38] duration metric: took 1.781601154s waiting for node "multinode-370491" to be "Ready" ...
	I1030 23:35:58.861962  232335 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
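	(Editor's note) The node_ready.go and pod_ready.go entries above show minikube polling the API server roughly every 500ms until the node's Ready condition, and then each system pod's Ready condition, turns True. As an illustrative sketch only — not minikube's actual helper code; the function name, kubeconfig handling, and poll interval below are assumptions — the same node-readiness poll can be written with client-go like this:

```go
// Sketch of the readiness poll described by the node_ready.go log lines above.
// Names here are illustrative assumptions, not minikube's own helpers.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports Ready=True
// or the timeout expires.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log timestamps
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "multinode-370491", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
```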
	I1030 23:35:58.862029  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I1030 23:35:58.862041  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:58.862049  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:58.862057  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:58.865752  232335 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:35:58.865773  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:58.865782  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:58.865791  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:58 GMT
	I1030 23:35:58.865799  232335 round_trippers.go:580]     Audit-Id: 7bb959fd-a679-434b-97b5-6c11d69ce97e
	I1030 23:35:58.865807  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:58.865819  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:58.865830  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:58.868210  232335 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"825"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"755","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82954 chars]
	I1030 23:35:58.870730  232335 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6pgvt" in "kube-system" namespace to be "Ready" ...
	I1030 23:35:58.870798  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6pgvt
	I1030 23:35:58.870808  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:58.870816  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:58.870821  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:58.873250  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:35:58.873273  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:58.873283  232335 round_trippers.go:580]     Audit-Id: 6b4290b6-7921-45b7-a765-a3ae2e791fe2
	I1030 23:35:58.873289  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:58.873295  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:58.873302  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:58.873308  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:58.873316  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:58 GMT
	I1030 23:35:58.873603  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"755","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1030 23:35:58.874119  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:58.874133  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:58.874141  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:58.874151  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:58.876003  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:35:58.876017  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:58.876023  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:58.876028  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:58 GMT
	I1030 23:35:58.876033  232335 round_trippers.go:580]     Audit-Id: 6294e472-0220-4301-82e5-32db2b09b61a
	I1030 23:35:58.876038  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:58.876043  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:58.876048  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:58.876227  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:35:58.876724  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6pgvt
	I1030 23:35:58.876747  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:58.876758  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:58.876767  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:58.878793  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:35:58.878812  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:58.878827  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:58.878836  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:58.878844  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:58.878853  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:58 GMT
	I1030 23:35:58.878862  232335 round_trippers.go:580]     Audit-Id: a73d03f2-51ec-47c8-ac00-91c30e7b1f48
	I1030 23:35:58.878874  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:58.879066  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"755","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1030 23:35:58.952764  232335 request.go:629] Waited for 73.169577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:58.952845  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:58.952856  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:58.952864  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:58.952870  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:58.955648  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:35:58.955672  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:58.955682  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:58.955690  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:58.955695  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:58 GMT
	I1030 23:35:58.955700  232335 round_trippers.go:580]     Audit-Id: 3e5b800d-b771-493b-a74e-d159824aa478
	I1030 23:35:58.955709  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:58.955720  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:58.955874  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:35:59.457025  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6pgvt
	I1030 23:35:59.457049  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:59.457059  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:59.457065  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:59.460417  232335 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:35:59.460440  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:59.460449  232335 round_trippers.go:580]     Audit-Id: d5384e03-6cbc-4441-a14b-54a3aac3d4cf
	I1030 23:35:59.460454  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:59.460467  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:59.460472  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:59.460477  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:59.460483  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:59 GMT
	I1030 23:35:59.461158  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"755","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1030 23:35:59.461752  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:59.461769  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:59.461780  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:59.461790  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:59.463930  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:35:59.463950  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:59.463960  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:59.463969  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:59.463978  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:59.463990  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:59 GMT
	I1030 23:35:59.463999  232335 round_trippers.go:580]     Audit-Id: b019b476-9763-4600-b418-60fc0c747849
	I1030 23:35:59.464019  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:59.464277  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:35:59.957058  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6pgvt
	I1030 23:35:59.957082  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:59.957091  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:59.957097  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:59.959900  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:35:59.959919  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:59.959926  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:59 GMT
	I1030 23:35:59.959932  232335 round_trippers.go:580]     Audit-Id: d05dcfc6-8a27-485a-8b52-45f78811737f
	I1030 23:35:59.959937  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:59.959943  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:59.959954  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:59.959962  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:59.960234  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"755","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1030 23:35:59.960742  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:35:59.960757  232335 round_trippers.go:469] Request Headers:
	I1030 23:35:59.960765  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:35:59.960771  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:35:59.963195  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:35:59.963216  232335 round_trippers.go:577] Response Headers:
	I1030 23:35:59.963226  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:35:59.963235  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:35:59.963243  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:35:59.963251  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:35:59 GMT
	I1030 23:35:59.963260  232335 round_trippers.go:580]     Audit-Id: 004eda1c-8e0b-46df-ba78-400f5f7d7f57
	I1030 23:35:59.963271  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:35:59.963535  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:36:00.456764  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6pgvt
	I1030 23:36:00.456789  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:00.456799  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:00.456810  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:00.459827  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:00.459853  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:00.459860  232335 round_trippers.go:580]     Audit-Id: d53a9ee5-3ccd-4d41-907c-9c6f38f46062
	I1030 23:36:00.459866  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:00.459871  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:00.459876  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:00.459881  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:00.459886  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:00 GMT
	I1030 23:36:00.460176  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"755","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1030 23:36:00.460746  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:00.460763  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:00.460770  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:00.460776  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:00.463061  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:00.463081  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:00.463090  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:00.463098  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:00.463106  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:00.463114  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:00 GMT
	I1030 23:36:00.463122  232335 round_trippers.go:580]     Audit-Id: 6c048f59-84c2-473f-af8f-21c7f16d1aaa
	I1030 23:36:00.463134  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:00.463291  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:36:00.956944  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6pgvt
	I1030 23:36:00.956970  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:00.956978  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:00.956984  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:00.963155  232335 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1030 23:36:00.963184  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:00.963194  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:00.963203  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:00.963217  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:00.963230  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:00 GMT
	I1030 23:36:00.963238  232335 round_trippers.go:580]     Audit-Id: 77ff90ce-eda5-4736-879b-2159d31c811d
	I1030 23:36:00.963246  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:00.963487  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"755","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1030 23:36:00.964157  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:00.964177  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:00.964189  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:00.964198  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:00.972886  232335 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1030 23:36:00.972907  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:00.972914  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:00.972920  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:00.972925  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:00.972954  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:00 GMT
	I1030 23:36:00.972968  232335 round_trippers.go:580]     Audit-Id: 3496719a-b22c-4624-bfd9-61109d2abba2
	I1030 23:36:00.972981  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:00.973210  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:36:00.973701  232335 pod_ready.go:102] pod "coredns-5dd5756b68-6pgvt" in "kube-system" namespace has status "Ready":"False"
	I1030 23:36:01.456429  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6pgvt
	I1030 23:36:01.456464  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:01.456476  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:01.456483  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:01.460611  232335 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 23:36:01.460641  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:01.460650  232335 round_trippers.go:580]     Audit-Id: d6c95ebc-4702-4088-8379-f75157f845dc
	I1030 23:36:01.460669  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:01.460677  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:01.460686  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:01.460700  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:01.460715  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:01 GMT
	I1030 23:36:01.460907  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"755","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1030 23:36:01.461438  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:01.461455  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:01.461462  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:01.461470  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:01.464963  232335 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:36:01.464986  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:01.464998  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:01 GMT
	I1030 23:36:01.465004  232335 round_trippers.go:580]     Audit-Id: 949f80b9-9180-4172-9b68-c12f66d90d4f
	I1030 23:36:01.465009  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:01.465014  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:01.465019  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:01.465024  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:01.465170  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:36:01.956838  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6pgvt
	I1030 23:36:01.956860  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:01.956869  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:01.956875  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:01.959548  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:01.959567  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:01.959576  232335 round_trippers.go:580]     Audit-Id: bb1fd4e1-281e-4637-b3d2-4dc1abb705e9
	I1030 23:36:01.959583  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:01.959591  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:01.959598  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:01.959608  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:01.959620  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:01 GMT
	I1030 23:36:01.959931  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"755","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1030 23:36:01.960390  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:01.960403  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:01.960410  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:01.960416  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:01.962891  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:01.962911  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:01.962920  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:01.962930  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:01 GMT
	I1030 23:36:01.962938  232335 round_trippers.go:580]     Audit-Id: dd35b4b0-6c44-4f30-a0e5-a9a2416b7b43
	I1030 23:36:01.962946  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:01.962954  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:01.962961  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:01.963206  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:36:02.456478  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6pgvt
	I1030 23:36:02.456506  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:02.456514  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:02.456520  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:02.458971  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:02.458998  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:02.459008  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:02 GMT
	I1030 23:36:02.459016  232335 round_trippers.go:580]     Audit-Id: 90f9ce14-185c-46c7-944c-f2fb42f9d3ba
	I1030 23:36:02.459023  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:02.459031  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:02.459038  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:02.459046  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:02.459922  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"833","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1030 23:36:02.460534  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:02.460550  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:02.460558  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:02.460563  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:02.464422  232335 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:36:02.464456  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:02.464466  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:02.464474  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:02 GMT
	I1030 23:36:02.464483  232335 round_trippers.go:580]     Audit-Id: 365692c9-e0a1-4966-92ac-6f314425e94b
	I1030 23:36:02.464494  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:02.464505  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:02.464515  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:02.464724  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:36:02.465155  232335 pod_ready.go:92] pod "coredns-5dd5756b68-6pgvt" in "kube-system" namespace has status "Ready":"True"
	I1030 23:36:02.465177  232335 pod_ready.go:81] duration metric: took 3.594426159s waiting for pod "coredns-5dd5756b68-6pgvt" in "kube-system" namespace to be "Ready" ...
	I1030 23:36:02.465187  232335 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-370491" in "kube-system" namespace to be "Ready" ...
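	(Editor's note) The "Ready":"True" / "Ready":"False" messages for coredns above, and the etcd wait that begins here, come from inspecting the PodReady condition in each pod's status. A minimal, self-contained sketch of that check — an illustration under assumed names, not the test's own code — using the same client-go calls:

```go
// Sketch of the per-pod readiness check reflected in the pod_ready.go log lines.
// Package and function names are illustrative assumptions.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(context.Background(), cs, "kube-system", "etcd-multinode-370491")
	if err != nil {
		panic(err)
	}
	fmt.Println("etcd-multinode-370491 Ready:", ready)
}
```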
	I1030 23:36:02.465246  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-370491
	I1030 23:36:02.465254  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:02.465260  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:02.465266  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:02.467002  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:36:02.467016  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:02.467031  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:02.467039  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:02.467053  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:02.467061  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:02.467074  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:02 GMT
	I1030 23:36:02.467088  232335 round_trippers.go:580]     Audit-Id: 4e7f4aaf-bc48-4f20-b6a1-fea8d2d230e3
	I1030 23:36:02.467299  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-370491","namespace":"kube-system","uid":"eb24307f-f00b-4406-bb05-b18eafd0eca1","resourceVersion":"754","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.231:2379","kubernetes.io/config.hash":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.mirror":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.seen":"2023-10-30T23:25:35.493661052Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1030 23:36:02.467804  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:02.467819  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:02.467826  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:02.467832  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:02.469555  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:36:02.469571  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:02.469580  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:02 GMT
	I1030 23:36:02.469591  232335 round_trippers.go:580]     Audit-Id: d4d3c42b-6c97-44c1-b2ee-b983de0ea45d
	I1030 23:36:02.469603  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:02.469611  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:02.469616  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:02.469626  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:02.469821  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:36:02.470155  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-370491
	I1030 23:36:02.470169  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:02.470176  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:02.470182  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:02.471835  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:36:02.471853  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:02.471862  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:02.471870  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:02.471876  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:02.471885  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:02.471892  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:02 GMT
	I1030 23:36:02.471901  232335 round_trippers.go:580]     Audit-Id: feacf175-7aa8-4022-b2ea-74f24a50617d
	I1030 23:36:02.472058  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-370491","namespace":"kube-system","uid":"eb24307f-f00b-4406-bb05-b18eafd0eca1","resourceVersion":"754","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.231:2379","kubernetes.io/config.hash":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.mirror":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.seen":"2023-10-30T23:25:35.493661052Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1030 23:36:02.553720  232335 request.go:629] Waited for 81.294041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:02.553804  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:02.553812  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:02.553833  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:02.553847  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:02.556831  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:02.556859  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:02.556871  232335 round_trippers.go:580]     Audit-Id: 7b33ce5f-a5a8-4bb1-8ed3-adc10f7306fe
	I1030 23:36:02.556884  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:02.556894  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:02.556911  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:02.556953  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:02.556964  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:02 GMT
	I1030 23:36:02.557148  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:36:03.058345  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-370491
	I1030 23:36:03.058374  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:03.058387  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:03.058396  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:03.061090  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:03.061114  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:03.061124  232335 round_trippers.go:580]     Audit-Id: 478a2ecf-d10e-4c63-bdf7-d34121c1e810
	I1030 23:36:03.061131  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:03.061138  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:03.061145  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:03.061152  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:03.061161  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:03 GMT
	I1030 23:36:03.061718  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-370491","namespace":"kube-system","uid":"eb24307f-f00b-4406-bb05-b18eafd0eca1","resourceVersion":"754","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.231:2379","kubernetes.io/config.hash":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.mirror":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.seen":"2023-10-30T23:25:35.493661052Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1030 23:36:03.062142  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:03.062155  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:03.062165  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:03.062173  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:03.064292  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:03.064306  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:03.064313  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:03 GMT
	I1030 23:36:03.064318  232335 round_trippers.go:580]     Audit-Id: aee17c38-6f42-413b-bd82-72f79cb2e6b6
	I1030 23:36:03.064334  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:03.064346  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:03.064357  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:03.064366  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:03.064783  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:36:03.558676  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-370491
	I1030 23:36:03.558699  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:03.558707  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:03.558713  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:03.561106  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:03.561122  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:03.561128  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:03 GMT
	I1030 23:36:03.561134  232335 round_trippers.go:580]     Audit-Id: ce5ff04a-b7b8-4332-a0e7-ade523c5024d
	I1030 23:36:03.561142  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:03.561151  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:03.561164  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:03.561176  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:03.561331  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-370491","namespace":"kube-system","uid":"eb24307f-f00b-4406-bb05-b18eafd0eca1","resourceVersion":"754","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.231:2379","kubernetes.io/config.hash":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.mirror":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.seen":"2023-10-30T23:25:35.493661052Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1030 23:36:03.561725  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:03.561740  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:03.561747  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:03.561753  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:03.564071  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:03.564095  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:03.564105  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:03.564118  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:03 GMT
	I1030 23:36:03.564128  232335 round_trippers.go:580]     Audit-Id: 6ebe7612-b98e-4ed4-b027-1ef752737078
	I1030 23:36:03.564141  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:03.564152  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:03.564168  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:03.564299  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:36:04.057911  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-370491
	I1030 23:36:04.057944  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:04.057956  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:04.057965  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:04.061036  232335 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:36:04.061061  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:04.061071  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:04.061079  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:04.061087  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:04 GMT
	I1030 23:36:04.061095  232335 round_trippers.go:580]     Audit-Id: 98090a15-927c-44da-9b4b-fdfe128fafdf
	I1030 23:36:04.061104  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:04.061117  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:04.061332  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-370491","namespace":"kube-system","uid":"eb24307f-f00b-4406-bb05-b18eafd0eca1","resourceVersion":"844","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.231:2379","kubernetes.io/config.hash":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.mirror":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.seen":"2023-10-30T23:25:35.493661052Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1030 23:36:04.061761  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:04.061777  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:04.061787  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:04.061800  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:04.063992  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:04.064013  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:04.064023  232335 round_trippers.go:580]     Audit-Id: 4d7999c1-ae02-43eb-abbf-964f0357d0c6
	I1030 23:36:04.064032  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:04.064040  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:04.064055  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:04.064069  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:04.064082  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:04 GMT
	I1030 23:36:04.064366  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:36:04.064767  232335 pod_ready.go:92] pod "etcd-multinode-370491" in "kube-system" namespace has status "Ready":"True"
	I1030 23:36:04.064786  232335 pod_ready.go:81] duration metric: took 1.599592565s waiting for pod "etcd-multinode-370491" in "kube-system" namespace to be "Ready" ...
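Each "waiting up to 6m0s for pod ... to be Ready" phase is a poll-until-timeout loop over the Pod object, re-fetching it until the Ready condition flips or the deadline passes. A hedged sketch of the same pattern using apimachinery's wait helpers (the helper name waitForPodReady is hypothetical, not the minikube implementation):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls the named Pod until its Ready condition is True or
// the timeout expires, mirroring the "waiting up to 6m0s ..." pattern above.
func waitForPodReady(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(client, "kube-system", "etcd-multinode-370491", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("etcd pod is Ready")
}
```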
	I1030 23:36:04.064804  232335 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:36:04.064856  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-370491
	I1030 23:36:04.064863  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:04.064870  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:04.064876  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:04.066736  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:36:04.066752  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:04.066761  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:04.066769  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:04 GMT
	I1030 23:36:04.066779  232335 round_trippers.go:580]     Audit-Id: 2d899dd5-5730-497e-beb6-f386d17fa1c0
	I1030 23:36:04.066796  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:04.066809  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:04.066818  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:04.067130  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-370491","namespace":"kube-system","uid":"d1874c7c-46ee-42eb-a395-c0d0138b3422","resourceVersion":"748","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.231:8443","kubernetes.io/config.hash":"377aac2edfa5973c73516a60b3dd1cd5","kubernetes.io/config.mirror":"377aac2edfa5973c73516a60b3dd1cd5","kubernetes.io/config.seen":"2023-10-30T23:25:35.493664410Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1030 23:36:04.152768  232335 request.go:629] Waited for 85.164467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:04.152831  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:04.152839  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:04.152851  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:04.152864  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:04.154960  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:04.154983  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:04.154993  232335 round_trippers.go:580]     Audit-Id: c178ea44-5942-4a46-9173-6143a444e599
	I1030 23:36:04.155003  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:04.155011  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:04.155019  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:04.155031  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:04.155042  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:04 GMT
	I1030 23:36:04.155788  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:36:04.353605  232335 request.go:629] Waited for 197.367707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-370491
	I1030 23:36:04.353684  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-370491
	I1030 23:36:04.353693  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:04.353706  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:04.353716  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:04.356322  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:04.356338  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:04.356345  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:04.356351  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:04.356358  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:04.356367  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:04 GMT
	I1030 23:36:04.356376  232335 round_trippers.go:580]     Audit-Id: 208bb7fe-40cf-432e-baa6-8071b811183d
	I1030 23:36:04.356387  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:04.356541  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-370491","namespace":"kube-system","uid":"d1874c7c-46ee-42eb-a395-c0d0138b3422","resourceVersion":"748","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.231:8443","kubernetes.io/config.hash":"377aac2edfa5973c73516a60b3dd1cd5","kubernetes.io/config.mirror":"377aac2edfa5973c73516a60b3dd1cd5","kubernetes.io/config.seen":"2023-10-30T23:25:35.493664410Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1030 23:36:04.553366  232335 request.go:629] Waited for 196.335817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:04.553434  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:04.553439  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:04.553447  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:04.553453  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:04.555765  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:04.555784  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:04.555791  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:04.555796  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:04 GMT
	I1030 23:36:04.555801  232335 round_trippers.go:580]     Audit-Id: 8eb219a4-ff33-423a-b680-3fd40881a5f8
	I1030 23:36:04.555807  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:04.555816  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:04.555824  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:04.555980  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:36:05.056680  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-370491
	I1030 23:36:05.056710  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:05.056720  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:05.056729  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:05.059372  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:05.059400  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:05.059407  232335 round_trippers.go:580]     Audit-Id: d3ad015f-bcbf-4ddf-9479-5bb36ab41252
	I1030 23:36:05.059415  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:05.059423  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:05.059431  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:05.059440  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:05.059449  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:05 GMT
	I1030 23:36:05.059617  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-370491","namespace":"kube-system","uid":"d1874c7c-46ee-42eb-a395-c0d0138b3422","resourceVersion":"846","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.231:8443","kubernetes.io/config.hash":"377aac2edfa5973c73516a60b3dd1cd5","kubernetes.io/config.mirror":"377aac2edfa5973c73516a60b3dd1cd5","kubernetes.io/config.seen":"2023-10-30T23:25:35.493664410Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1030 23:36:05.060196  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:05.060212  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:05.060224  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:05.060233  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:05.062438  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:05.062456  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:05.062463  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:05.062469  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:05 GMT
	I1030 23:36:05.062478  232335 round_trippers.go:580]     Audit-Id: ca4a6ac7-3600-42c1-88ee-7d1085c52b38
	I1030 23:36:05.062485  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:05.062493  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:05.062501  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:05.062914  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:36:05.063231  232335 pod_ready.go:92] pod "kube-apiserver-multinode-370491" in "kube-system" namespace has status "Ready":"True"
	I1030 23:36:05.063250  232335 pod_ready.go:81] duration metric: took 998.434333ms waiting for pod "kube-apiserver-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:36:05.063260  232335 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:36:05.153608  232335 request.go:629] Waited for 90.258243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-370491
	I1030 23:36:05.153693  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-370491
	I1030 23:36:05.153704  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:05.153712  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:05.153725  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:05.156080  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:05.156100  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:05.156109  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:05.156117  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:05.156125  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:05.156133  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:05 GMT
	I1030 23:36:05.156141  232335 round_trippers.go:580]     Audit-Id: 29148965-da91-42b2-94f3-fe2fbb185730
	I1030 23:36:05.156147  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:05.156417  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-370491","namespace":"kube-system","uid":"4da6c57f-cec4-498b-a390-3fa2f8619a0b","resourceVersion":"827","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"55259bd1b9f1e240aa9139582b4696e7","kubernetes.io/config.mirror":"55259bd1b9f1e240aa9139582b4696e7","kubernetes.io/config.seen":"2023-10-30T23:25:35.493665415Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1030 23:36:05.352848  232335 request.go:629] Waited for 195.998378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:05.352906  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:05.352911  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:05.352918  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:05.352924  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:05.356451  232335 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:36:05.356479  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:05.356486  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:05.356491  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:05.356496  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:05.356501  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:05.356507  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:05 GMT
	I1030 23:36:05.356512  232335 round_trippers.go:580]     Audit-Id: e54e4de1-f751-471a-800f-3b6d79aa2e4f
	I1030 23:36:05.356963  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:36:05.357302  232335 pod_ready.go:92] pod "kube-controller-manager-multinode-370491" in "kube-system" namespace has status "Ready":"True"
	I1030 23:36:05.357318  232335 pod_ready.go:81] duration metric: took 294.048844ms waiting for pod "kube-controller-manager-multinode-370491" in "kube-system" namespace to be "Ready" ...
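Every pod verdict above is paired with a GET of the node that hosts the pod; the truncated Node bodies carry the minikube.k8s.io labels and the node's conditions. A small sketch, under the assumption of a reachable kubeconfig, that fetches the same Node and reports its Ready condition and labels:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same object the log keeps fetching via GET /api/v1/nodes/multinode-370491.
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-370491", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := false
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("node %s ready=%v labels=%v\n", node.Name, ready, node.Labels)
}
```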
	I1030 23:36:05.357328  232335 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g9wzd" in "kube-system" namespace to be "Ready" ...
	I1030 23:36:05.553327  232335 request.go:629] Waited for 195.906117ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wzd
	I1030 23:36:05.553446  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wzd
	I1030 23:36:05.553460  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:05.553472  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:05.553483  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:05.556184  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:05.556206  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:05.556221  232335 round_trippers.go:580]     Audit-Id: 82b989c0-a7fb-4750-92a8-76ad35d2c850
	I1030 23:36:05.556227  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:05.556232  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:05.556237  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:05.556245  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:05.556253  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:05 GMT
	I1030 23:36:05.556653  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g9wzd","generateName":"kube-proxy-","namespace":"kube-system","uid":"9bffc44c-9d7f-4d1c-82e7-f249c53bf452","resourceVersion":"485","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8ea24659-b585-4c83-ad95-b587ea718f59","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ea24659-b585-4c83-ad95-b587ea718f59\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5521 chars]
	I1030 23:36:05.753546  232335 request.go:629] Waited for 196.398719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:36:05.753639  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:36:05.753647  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:05.753659  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:05.753670  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:05.755949  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:05.755974  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:05.755984  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:05.755992  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:05 GMT
	I1030 23:36:05.756000  232335 round_trippers.go:580]     Audit-Id: dc20d7b3-3600-412c-a0af-2d151d0b4020
	I1030 23:36:05.756008  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:05.756016  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:05.756024  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:05.756498  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e","resourceVersion":"713","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3683 chars]
	I1030 23:36:05.756882  232335 pod_ready.go:92] pod "kube-proxy-g9wzd" in "kube-system" namespace has status "Ready":"True"
	I1030 23:36:05.756913  232335 pod_ready.go:81] duration metric: took 399.579736ms waiting for pod "kube-proxy-g9wzd" in "kube-system" namespace to be "Ready" ...
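kube-proxy runs as a DaemonSet, so the log checks one kube-proxy pod per node (multinode-370491, -m02, -m03), each followed by a GET of the node it is scheduled on. A sketch that lists those replicas by the k8s-app=kube-proxy label (the label value is taken from the pod bodies above) and prints the node for each:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One kube-proxy pod per node is expected; print which node hosts each replica.
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "k8s-app=kube-proxy",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s -> node %s\n", p.Name, p.Spec.NodeName)
	}
}
```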
	I1030 23:36:05.756926  232335 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tv2b7" in "kube-system" namespace to be "Ready" ...
	I1030 23:36:05.953410  232335 request.go:629] Waited for 196.395236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tv2b7
	I1030 23:36:05.953487  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tv2b7
	I1030 23:36:05.953492  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:05.953500  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:05.953506  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:05.955827  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:05.955852  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:05.955862  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:05.955875  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:05.955886  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:05 GMT
	I1030 23:36:05.955903  232335 round_trippers.go:580]     Audit-Id: b77ebff6-e782-4650-bf77-f2f1728331e6
	I1030 23:36:05.955910  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:05.955922  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:05.956319  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tv2b7","generateName":"kube-proxy-","namespace":"kube-system","uid":"d68314ab-5356-4cd6-a611-f3efd8b2d4e0","resourceVersion":"685","creationTimestamp":"2023-10-30T23:27:17Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8ea24659-b585-4c83-ad95-b587ea718f59","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ea24659-b585-4c83-ad95-b587ea718f59\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5525 chars]
	I1030 23:36:06.153225  232335 request.go:629] Waited for 196.370238ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m03
	I1030 23:36:06.153281  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m03
	I1030 23:36:06.153286  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:06.153294  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:06.153301  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:06.156465  232335 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:36:06.156484  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:06.156491  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:06.156496  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:06 GMT
	I1030 23:36:06.156501  232335 round_trippers.go:580]     Audit-Id: 0b166c8d-de49-48cf-b576-0568a85e88e8
	I1030 23:36:06.156506  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:06.156516  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:06.156522  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:06.157452  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m03","uid":"5868a069-28a9-411e-b010-48ecb6a9e16b","resourceVersion":"705","creationTimestamp":"2023-10-30T23:27:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:27:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I1030 23:36:06.157792  232335 pod_ready.go:92] pod "kube-proxy-tv2b7" in "kube-system" namespace has status "Ready":"True"
	I1030 23:36:06.157810  232335 pod_ready.go:81] duration metric: took 400.877446ms waiting for pod "kube-proxy-tv2b7" in "kube-system" namespace to be "Ready" ...
	I1030 23:36:06.157819  232335 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xbsl5" in "kube-system" namespace to be "Ready" ...
	I1030 23:36:06.353287  232335 request.go:629] Waited for 195.373746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xbsl5
	I1030 23:36:06.353347  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xbsl5
	I1030 23:36:06.353355  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:06.353364  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:06.353370  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:06.356506  232335 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:36:06.356532  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:06.356542  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:06.356552  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:06.356560  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:06.356567  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:06.356576  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:06 GMT
	I1030 23:36:06.356593  232335 round_trippers.go:580]     Audit-Id: ad02ab8b-cb6b-4ef1-9d38-486273747c0e
	I1030 23:36:06.356817  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xbsl5","generateName":"kube-proxy-","namespace":"kube-system","uid":"eb41a78a-bf80-4546-b7d6-423a8c3ad0e1","resourceVersion":"760","creationTimestamp":"2023-10-30T23:25:47Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8ea24659-b585-4c83-ad95-b587ea718f59","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ea24659-b585-4c83-ad95-b587ea718f59\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1030 23:36:06.553702  232335 request.go:629] Waited for 196.337032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:06.553760  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:06.553765  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:06.553786  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:06.553793  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:06.556847  232335 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:36:06.556872  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:06.556882  232335 round_trippers.go:580]     Audit-Id: 5dfa79a6-e305-47af-a2ff-2917f3fe2fc4
	I1030 23:36:06.556891  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:06.556899  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:06.556906  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:06.556915  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:06.556949  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:06 GMT
	I1030 23:36:06.557158  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:36:06.557625  232335 pod_ready.go:92] pod "kube-proxy-xbsl5" in "kube-system" namespace has status "Ready":"True"
	I1030 23:36:06.557650  232335 pod_ready.go:81] duration metric: took 399.82334ms waiting for pod "kube-proxy-xbsl5" in "kube-system" namespace to be "Ready" ...
	I1030 23:36:06.557663  232335 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:36:06.753125  232335 request.go:629] Waited for 195.366181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-370491
	I1030 23:36:06.753235  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-370491
	I1030 23:36:06.753248  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:06.753260  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:06.753272  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:06.756150  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:06.756384  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:06.756431  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:06.756452  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:06.756470  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:06.756488  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:06.756505  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:06 GMT
	I1030 23:36:06.756522  232335 round_trippers.go:580]     Audit-Id: 0e5f2038-b699-4110-a74f-acaca9d6ce79
	I1030 23:36:06.756670  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-370491","namespace":"kube-system","uid":"b71476bb-1843-4ff9-8639-40ae73b72c8b","resourceVersion":"855","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dd3eb04179d9bdc0a8332c92e6e42d18","kubernetes.io/config.mirror":"dd3eb04179d9bdc0a8332c92e6e42d18","kubernetes.io/config.seen":"2023-10-30T23:25:35.493666103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1030 23:36:06.953134  232335 request.go:629] Waited for 195.778492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:06.953216  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:36:06.953224  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:06.953232  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:06.953239  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:06.955773  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:06.955791  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:06.955798  232335 round_trippers.go:580]     Audit-Id: 71caf453-ead4-422c-aa33-0b8867878511
	I1030 23:36:06.955811  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:06.955836  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:06.955848  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:06.955854  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:06.955859  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:06 GMT
	I1030 23:36:06.956329  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 5949 chars]
	I1030 23:36:06.956763  232335 pod_ready.go:92] pod "kube-scheduler-multinode-370491" in "kube-system" namespace has status "Ready":"True"
	I1030 23:36:06.956782  232335 pod_ready.go:81] duration metric: took 399.10577ms waiting for pod "kube-scheduler-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:36:06.956795  232335 pod_ready.go:38] duration metric: took 8.09481978s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
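The block above is minikube's own readiness poll over the system-critical pods; the "client-side throttling" lines come from its client rate limiter, not from API-server priority and fairness. A rough manual equivalent, assuming the kubeconfig context carries the profile name multinode-370491 (the usual minikube convention):

	kubectl --context multinode-370491 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context multinode-370491 -n kube-system wait pod \
	  -l component=kube-scheduler --for=condition=Ready --timeout=6m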
	I1030 23:36:06.956818  232335 api_server.go:52] waiting for apiserver process to appear ...
	I1030 23:36:06.956876  232335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 23:36:06.974215  232335 command_runner.go:130] > 1094
	I1030 23:36:06.974405  232335 api_server.go:72] duration metric: took 9.995140034s to wait for apiserver process to appear ...
	I1030 23:36:06.974426  232335 api_server.go:88] waiting for apiserver healthz status ...
	I1030 23:36:06.974445  232335 api_server.go:253] Checking apiserver healthz at https://192.168.39.231:8443/healthz ...
	I1030 23:36:06.982321  232335 api_server.go:279] https://192.168.39.231:8443/healthz returned 200:
	ok
	I1030 23:36:06.982393  232335 round_trippers.go:463] GET https://192.168.39.231:8443/version
	I1030 23:36:06.982403  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:06.982410  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:06.982416  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:06.984328  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:36:06.984346  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:06.984357  232335 round_trippers.go:580]     Audit-Id: 53659b40-47db-4bb1-a5fb-8d728194dd9e
	I1030 23:36:06.984366  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:06.984376  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:06.984382  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:06.984395  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:06.984409  232335 round_trippers.go:580]     Content-Length: 264
	I1030 23:36:06.984416  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:06 GMT
	I1030 23:36:06.984440  232335 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1030 23:36:06.984505  232335 api_server.go:141] control plane version: v1.28.3
	I1030 23:36:06.984519  232335 api_server.go:131] duration metric: took 10.08673ms to wait for apiserver health ...
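The healthz and /version probes above hit the API server directly at the address logged. A minimal manual equivalent, assuming the default anonymous access to /healthz and /version (the system:public-info-viewer binding) is still in place on this cluster:

	curl -sk https://192.168.39.231:8443/healthz    # expect: ok
	curl -sk https://192.168.39.231:8443/version    # expect gitVersion v1.28.3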
	I1030 23:36:06.984528  232335 system_pods.go:43] waiting for kube-system pods to appear ...
	I1030 23:36:07.152860  232335 request.go:629] Waited for 168.2469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I1030 23:36:07.152964  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I1030 23:36:07.152975  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:07.152983  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:07.152990  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:07.157404  232335 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 23:36:07.157427  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:07.157436  232335 round_trippers.go:580]     Audit-Id: 770c4e63-9b4d-4ca1-9bf3-a1d0d3f4285c
	I1030 23:36:07.157444  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:07.157451  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:07.157491  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:07.157508  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:07.157518  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:07 GMT
	I1030 23:36:07.159677  232335 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"855"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"833","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81875 chars]
	I1030 23:36:07.162104  232335 system_pods.go:59] 12 kube-system pods found
	I1030 23:36:07.162125  232335 system_pods.go:61] "coredns-5dd5756b68-6pgvt" [d854be1d-ae4e-420a-9853-253f0258915c] Running
	I1030 23:36:07.162130  232335 system_pods.go:61] "etcd-multinode-370491" [eb24307f-f00b-4406-bb05-b18eafd0eca1] Running
	I1030 23:36:07.162137  232335 system_pods.go:61] "kindnet-76g2q" [6f0bf1cd-7456-4578-acf0-6aa80be9db33] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1030 23:36:07.162149  232335 system_pods.go:61] "kindnet-m45c4" [6e2a0237-6787-4bba-b723-93eaf5ac3005] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1030 23:36:07.162156  232335 system_pods.go:61] "kindnet-m9f5k" [a79ceb52-48df-4240-9edc-05c81bf58f73] Running
	I1030 23:36:07.162161  232335 system_pods.go:61] "kube-apiserver-multinode-370491" [d1874c7c-46ee-42eb-a395-c0d0138b3422] Running
	I1030 23:36:07.162165  232335 system_pods.go:61] "kube-controller-manager-multinode-370491" [4da6c57f-cec4-498b-a390-3fa2f8619a0b] Running
	I1030 23:36:07.162170  232335 system_pods.go:61] "kube-proxy-g9wzd" [9bffc44c-9d7f-4d1c-82e7-f249c53bf452] Running
	I1030 23:36:07.162174  232335 system_pods.go:61] "kube-proxy-tv2b7" [d68314ab-5356-4cd6-a611-f3efd8b2d4e0] Running
	I1030 23:36:07.162178  232335 system_pods.go:61] "kube-proxy-xbsl5" [eb41a78a-bf80-4546-b7d6-423a8c3ad0e1] Running
	I1030 23:36:07.162183  232335 system_pods.go:61] "kube-scheduler-multinode-370491" [b71476bb-1843-4ff9-8639-40ae73b72c8b] Running
	I1030 23:36:07.162187  232335 system_pods.go:61] "storage-provisioner" [6f2bbacd-e138-4f82-961e-76f1daf88ccd] Running
	I1030 23:36:07.162194  232335 system_pods.go:74] duration metric: took 177.659513ms to wait for pod list to return data ...
	I1030 23:36:07.162201  232335 default_sa.go:34] waiting for default service account to be created ...
	I1030 23:36:07.353648  232335 request.go:629] Waited for 191.362149ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/default/serviceaccounts
	I1030 23:36:07.353705  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/default/serviceaccounts
	I1030 23:36:07.353709  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:07.353717  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:07.353725  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:07.356184  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:07.356212  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:07.356224  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:07.356232  232335 round_trippers.go:580]     Content-Length: 261
	I1030 23:36:07.356240  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:07 GMT
	I1030 23:36:07.356249  232335 round_trippers.go:580]     Audit-Id: 820f0f60-a3a1-4d84-b924-4a609fe1cfdf
	I1030 23:36:07.356260  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:07.356271  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:07.356281  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:07.356311  232335 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"855"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"88ed5dd6-6353-42c8-b32f-dd95ef92c5ee","resourceVersion":"297","creationTimestamp":"2023-10-30T23:25:47Z"}}]}
	I1030 23:36:07.356515  232335 default_sa.go:45] found service account: "default"
	I1030 23:36:07.356536  232335 default_sa.go:55] duration metric: took 194.328565ms for default service account to be created ...
	I1030 23:36:07.356548  232335 system_pods.go:116] waiting for k8s-apps to be running ...
	I1030 23:36:07.552984  232335 request.go:629] Waited for 196.329759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I1030 23:36:07.553054  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I1030 23:36:07.553061  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:07.553119  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:07.553132  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:07.557245  232335 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 23:36:07.557264  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:07.557271  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:07 GMT
	I1030 23:36:07.557277  232335 round_trippers.go:580]     Audit-Id: 72de91fb-12a8-43a7-8044-1ea886c29ef2
	I1030 23:36:07.557282  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:07.557287  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:07.557293  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:07.557298  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:07.558584  232335 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"855"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"833","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81875 chars]
	I1030 23:36:07.561152  232335 system_pods.go:86] 12 kube-system pods found
	I1030 23:36:07.561173  232335 system_pods.go:89] "coredns-5dd5756b68-6pgvt" [d854be1d-ae4e-420a-9853-253f0258915c] Running
	I1030 23:36:07.561179  232335 system_pods.go:89] "etcd-multinode-370491" [eb24307f-f00b-4406-bb05-b18eafd0eca1] Running
	I1030 23:36:07.561185  232335 system_pods.go:89] "kindnet-76g2q" [6f0bf1cd-7456-4578-acf0-6aa80be9db33] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1030 23:36:07.561192  232335 system_pods.go:89] "kindnet-m45c4" [6e2a0237-6787-4bba-b723-93eaf5ac3005] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1030 23:36:07.561196  232335 system_pods.go:89] "kindnet-m9f5k" [a79ceb52-48df-4240-9edc-05c81bf58f73] Running
	I1030 23:36:07.561201  232335 system_pods.go:89] "kube-apiserver-multinode-370491" [d1874c7c-46ee-42eb-a395-c0d0138b3422] Running
	I1030 23:36:07.561208  232335 system_pods.go:89] "kube-controller-manager-multinode-370491" [4da6c57f-cec4-498b-a390-3fa2f8619a0b] Running
	I1030 23:36:07.561211  232335 system_pods.go:89] "kube-proxy-g9wzd" [9bffc44c-9d7f-4d1c-82e7-f249c53bf452] Running
	I1030 23:36:07.561215  232335 system_pods.go:89] "kube-proxy-tv2b7" [d68314ab-5356-4cd6-a611-f3efd8b2d4e0] Running
	I1030 23:36:07.561219  232335 system_pods.go:89] "kube-proxy-xbsl5" [eb41a78a-bf80-4546-b7d6-423a8c3ad0e1] Running
	I1030 23:36:07.561223  232335 system_pods.go:89] "kube-scheduler-multinode-370491" [b71476bb-1843-4ff9-8639-40ae73b72c8b] Running
	I1030 23:36:07.561228  232335 system_pods.go:89] "storage-provisioner" [6f2bbacd-e138-4f82-961e-76f1daf88ccd] Running
	I1030 23:36:07.561236  232335 system_pods.go:126] duration metric: took 204.68161ms to wait for k8s-apps to be running ...
	I1030 23:36:07.561245  232335 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 23:36:07.561289  232335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 23:36:07.576163  232335 system_svc.go:56] duration metric: took 14.910249ms WaitForService to wait for kubelet.
	I1030 23:36:07.576188  232335 kubeadm.go:581] duration metric: took 10.596927577s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1030 23:36:07.576211  232335 node_conditions.go:102] verifying NodePressure condition ...
	I1030 23:36:07.753692  232335 request.go:629] Waited for 177.378211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes
	I1030 23:36:07.753753  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes
	I1030 23:36:07.753760  232335 round_trippers.go:469] Request Headers:
	I1030 23:36:07.753768  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:36:07.753777  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:36:07.756554  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:36:07.756579  232335 round_trippers.go:577] Response Headers:
	I1030 23:36:07.756600  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:36:07.756614  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:36:07.756627  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:36:07 GMT
	I1030 23:36:07.756637  232335 round_trippers.go:580]     Audit-Id: a8ea327c-6563-438b-98d0-4b155567102a
	I1030 23:36:07.756659  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:36:07.756672  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:36:07.756820  232335 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"855"},"items":[{"metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"825","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manage
dFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1"," [truncated 15082 chars]
	I1030 23:36:07.757446  232335 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1030 23:36:07.757470  232335 node_conditions.go:123] node cpu capacity is 2
	I1030 23:36:07.757485  232335 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1030 23:36:07.757493  232335 node_conditions.go:123] node cpu capacity is 2
	I1030 23:36:07.757501  232335 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1030 23:36:07.757508  232335 node_conditions.go:123] node cpu capacity is 2
	I1030 23:36:07.757518  232335 node_conditions.go:105] duration metric: took 181.301165ms to run NodePressure ...
	I1030 23:36:07.757535  232335 start.go:228] waiting for startup goroutines ...
	I1030 23:36:07.757549  232335 start.go:233] waiting for cluster config update ...
	I1030 23:36:07.757562  232335 start.go:242] writing updated cluster config ...
	I1030 23:36:07.758043  232335 config.go:182] Loaded profile config "multinode-370491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:36:07.758146  232335 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/config.json ...
	I1030 23:36:07.761412  232335 out.go:177] * Starting worker node multinode-370491-m02 in cluster multinode-370491
	I1030 23:36:07.762655  232335 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1030 23:36:07.762679  232335 cache.go:56] Caching tarball of preloaded images
	I1030 23:36:07.762777  232335 preload.go:174] Found /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 23:36:07.762791  232335 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1030 23:36:07.762886  232335 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/config.json ...
	I1030 23:36:07.763071  232335 start.go:365] acquiring machines lock for multinode-370491-m02: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 23:36:07.763132  232335 start.go:369] acquired machines lock for "multinode-370491-m02" in 39.699µs
	I1030 23:36:07.763155  232335 start.go:96] Skipping create...Using existing machine configuration
	I1030 23:36:07.763167  232335 fix.go:54] fixHost starting: m02
	I1030 23:36:07.763476  232335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:36:07.763523  232335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:36:07.778205  232335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39771
	I1030 23:36:07.778590  232335 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:36:07.779041  232335 main.go:141] libmachine: Using API Version  1
	I1030 23:36:07.779070  232335 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:36:07.779386  232335 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:36:07.779607  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .DriverName
	I1030 23:36:07.779753  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetState
	I1030 23:36:07.781337  232335 fix.go:102] recreateIfNeeded on multinode-370491-m02: state=Running err=<nil>
	W1030 23:36:07.781353  232335 fix.go:128] unexpected machine state, will restart: <nil>
	I1030 23:36:07.783207  232335 out.go:177] * Updating the running kvm2 "multinode-370491-m02" VM ...
	I1030 23:36:07.784448  232335 machine.go:88] provisioning docker machine ...
	I1030 23:36:07.784463  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .DriverName
	I1030 23:36:07.784669  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetMachineName
	I1030 23:36:07.784833  232335 buildroot.go:166] provisioning hostname "multinode-370491-m02"
	I1030 23:36:07.784854  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetMachineName
	I1030 23:36:07.785002  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	I1030 23:36:07.787182  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:36:07.787629  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:36:07.787657  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:36:07.787815  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHPort
	I1030 23:36:07.788005  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:36:07.788163  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:36:07.788300  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHUsername
	I1030 23:36:07.788495  232335 main.go:141] libmachine: Using SSH client type: native
	I1030 23:36:07.788802  232335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I1030 23:36:07.788817  232335 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-370491-m02 && echo "multinode-370491-m02" | sudo tee /etc/hostname
	I1030 23:36:07.941076  232335 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-370491-m02
	
	I1030 23:36:07.941113  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	I1030 23:36:07.943806  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:36:07.944193  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:36:07.944232  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:36:07.944380  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHPort
	I1030 23:36:07.944547  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:36:07.944734  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:36:07.944822  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHUsername
	I1030 23:36:07.944978  232335 main.go:141] libmachine: Using SSH client type: native
	I1030 23:36:07.945375  232335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I1030 23:36:07.945398  232335 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-370491-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-370491-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-370491-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 23:36:08.077442  232335 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 23:36:08.077477  232335 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1030 23:36:08.077493  232335 buildroot.go:174] setting up certificates
	I1030 23:36:08.077501  232335 provision.go:83] configureAuth start
	I1030 23:36:08.077510  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetMachineName
	I1030 23:36:08.077762  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetIP
	I1030 23:36:08.080002  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:36:08.080350  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:36:08.080381  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:36:08.080590  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	I1030 23:36:08.082714  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:36:08.083063  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:36:08.083087  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:36:08.083216  232335 provision.go:138] copyHostCerts
	I1030 23:36:08.083253  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1030 23:36:08.083300  232335 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1030 23:36:08.083312  232335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1030 23:36:08.083405  232335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1030 23:36:08.083496  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1030 23:36:08.083520  232335 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1030 23:36:08.083527  232335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1030 23:36:08.083567  232335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1030 23:36:08.083626  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1030 23:36:08.083660  232335 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1030 23:36:08.083667  232335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1030 23:36:08.083705  232335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1030 23:36:08.083771  232335 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.multinode-370491-m02 san=[192.168.39.85 192.168.39.85 localhost 127.0.0.1 minikube multinode-370491-m02]
	I1030 23:36:08.277118  232335 provision.go:172] copyRemoteCerts
	I1030 23:36:08.277191  232335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 23:36:08.277225  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	I1030 23:36:08.279754  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:36:08.280150  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:36:08.280194  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:36:08.280381  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHPort
	I1030 23:36:08.280585  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:36:08.280754  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHUsername
	I1030 23:36:08.281016  232335 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m02/id_rsa Username:docker}
	I1030 23:36:08.374381  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 23:36:08.374438  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1030 23:36:08.397707  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 23:36:08.397773  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1030 23:36:08.419496  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 23:36:08.419562  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1030 23:36:08.441923  232335 provision.go:86] duration metric: configureAuth took 364.408366ms
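configureAuth above regenerates the server certificate with the SANs listed in the provision line and copies it to /etc/docker on the guest. A quick way to confirm the SANs landed as expected, sketched here assuming openssl is available inside the VM image:

	sudo openssl x509 -in /etc/docker/server.pem -noout -text \
	  | grep -A1 'Subject Alternative Name'
	# should list 192.168.39.85, 127.0.0.1, localhost, minikube and multinode-370491-m02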
	I1030 23:36:08.441950  232335 buildroot.go:189] setting minikube options for container-runtime
	I1030 23:36:08.442161  232335 config.go:182] Loaded profile config "multinode-370491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:36:08.442234  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	I1030 23:36:08.445439  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:36:08.445772  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:36:08.445800  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:36:08.445978  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHPort
	I1030 23:36:08.446181  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:36:08.446395  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:36:08.446563  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHUsername
	I1030 23:36:08.446743  232335 main.go:141] libmachine: Using SSH client type: native
	I1030 23:36:08.447047  232335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I1030 23:36:08.447070  232335 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 23:37:38.989206  232335 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 23:37:38.989237  232335 machine.go:91] provisioned docker machine in 1m31.204777128s
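The %!s(MISSING) in the provisioning command above (and in the later date +%!s(MISSING).%!N(MISSING) and find ... -printf "%!p(MISSING), " lines) is an artifact of the logger re-interpreting % verbs, not of the command that actually ran; the later ones are most likely date +%s.%N and -printf "%p, ". Reconstructed for readability, the command here was presumably the usual cri-o drop-in write:

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

Note that this single SSH command spans 23:36:08 to 23:37:38 in the log, so it accounts for essentially all of the 1m31s "provisioned docker machine" duration recorded above.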
	I1030 23:37:38.989250  232335 start.go:300] post-start starting for "multinode-370491-m02" (driver="kvm2")
	I1030 23:37:38.989262  232335 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 23:37:38.989303  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .DriverName
	I1030 23:37:38.989702  232335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 23:37:38.989748  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	I1030 23:37:38.992631  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:37:38.993078  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:37:38.993108  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:37:38.993397  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHPort
	I1030 23:37:38.993591  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:37:38.993823  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHUsername
	I1030 23:37:38.994020  232335 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m02/id_rsa Username:docker}
	I1030 23:37:39.091802  232335 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 23:37:39.096292  232335 command_runner.go:130] > NAME=Buildroot
	I1030 23:37:39.096316  232335 command_runner.go:130] > VERSION=2021.02.12-1-gea8740b-dirty
	I1030 23:37:39.096323  232335 command_runner.go:130] > ID=buildroot
	I1030 23:37:39.096332  232335 command_runner.go:130] > VERSION_ID=2021.02.12
	I1030 23:37:39.096339  232335 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1030 23:37:39.096375  232335 info.go:137] Remote host: Buildroot 2021.02.12
	I1030 23:37:39.096403  232335 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1030 23:37:39.096472  232335 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1030 23:37:39.096565  232335 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1030 23:37:39.096577  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> /etc/ssl/certs/2160052.pem
	I1030 23:37:39.096715  232335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 23:37:39.105581  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1030 23:37:39.131064  232335 start.go:303] post-start completed in 141.799925ms
	I1030 23:37:39.131085  232335 fix.go:56] fixHost completed within 1m31.367918622s
	I1030 23:37:39.131114  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	I1030 23:37:39.133886  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:37:39.134296  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:37:39.134330  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:37:39.134461  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHPort
	I1030 23:37:39.134677  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:37:39.134877  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:37:39.135053  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHUsername
	I1030 23:37:39.135287  232335 main.go:141] libmachine: Using SSH client type: native
	I1030 23:37:39.135656  232335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I1030 23:37:39.135676  232335 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1030 23:37:39.273249  232335 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698709059.261558555
	
	I1030 23:37:39.273278  232335 fix.go:206] guest clock: 1698709059.261558555
	I1030 23:37:39.273288  232335 fix.go:219] Guest: 2023-10-30 23:37:39.261558555 +0000 UTC Remote: 2023-10-30 23:37:39.131090574 +0000 UTC m=+448.915116991 (delta=130.467981ms)
	I1030 23:37:39.273311  232335 fix.go:190] guest clock delta is within tolerance: 130.467981ms
	I1030 23:37:39.273319  232335 start.go:83] releasing machines lock for "multinode-370491-m02", held for 1m31.510174651s
	I1030 23:37:39.273351  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .DriverName
	I1030 23:37:39.273658  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetIP
	I1030 23:37:39.276761  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:37:39.277184  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:37:39.277218  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:37:39.279443  232335 out.go:177] * Found network options:
	I1030 23:37:39.281008  232335 out.go:177]   - NO_PROXY=192.168.39.231
	W1030 23:37:39.282544  232335 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 23:37:39.282582  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .DriverName
	I1030 23:37:39.283285  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .DriverName
	I1030 23:37:39.283480  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .DriverName
	I1030 23:37:39.283567  232335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 23:37:39.283604  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	W1030 23:37:39.283677  232335 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 23:37:39.283760  232335 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 23:37:39.283789  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	I1030 23:37:39.286446  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:37:39.286707  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:37:39.286893  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:37:39.286920  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:37:39.287102  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHPort
	I1030 23:37:39.287110  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:37:39.287135  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:37:39.287319  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHPort
	I1030 23:37:39.287324  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:37:39.287540  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:37:39.287543  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHUsername
	I1030 23:37:39.287735  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHUsername
	I1030 23:37:39.287744  232335 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m02/id_rsa Username:docker}
	I1030 23:37:39.287917  232335 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m02/id_rsa Username:docker}
	I1030 23:37:39.529472  232335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1030 23:37:39.529471  232335 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1030 23:37:39.535620  232335 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1030 23:37:39.535663  232335 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 23:37:39.535725  232335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 23:37:39.544847  232335 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1030 23:37:39.544875  232335 start.go:472] detecting cgroup driver to use...
	I1030 23:37:39.544954  232335 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 23:37:39.561605  232335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 23:37:39.575498  232335 docker.go:198] disabling cri-docker service (if available) ...
	I1030 23:37:39.575545  232335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 23:37:39.587630  232335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 23:37:39.599199  232335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 23:37:39.801385  232335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 23:37:39.917046  232335 docker.go:214] disabling docker service ...
	I1030 23:37:39.917126  232335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 23:37:39.930760  232335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 23:37:39.943081  232335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 23:37:40.113142  232335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 23:37:40.269857  232335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 23:37:40.291584  232335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 23:37:40.308970  232335 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1030 23:37:40.309364  232335 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1030 23:37:40.309432  232335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:37:40.318643  232335 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 23:37:40.318700  232335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:37:40.327862  232335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:37:40.336781  232335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:37:40.346049  232335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 23:37:40.356559  232335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 23:37:40.365230  232335 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1030 23:37:40.365292  232335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 23:37:40.373214  232335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 23:37:40.504799  232335 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 23:37:42.859827  232335 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.354989492s)
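	The restart itself took about 2.4s; a quick manual health check equivalent to the socket wait that follows would be (a sketch, not part of this log):
	
	    sudo systemctl is-active crio && sudo test -S /var/run/crio/crio.sock && echo "crio socket ready"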
	I1030 23:37:42.859863  232335 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 23:37:42.859923  232335 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 23:37:42.868855  232335 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1030 23:37:42.868879  232335 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1030 23:37:42.868888  232335 command_runner.go:130] > Device: 16h/22d	Inode: 1257        Links: 1
	I1030 23:37:42.868900  232335 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1030 23:37:42.868907  232335 command_runner.go:130] > Access: 2023-10-30 23:37:42.755761982 +0000
	I1030 23:37:42.868916  232335 command_runner.go:130] > Modify: 2023-10-30 23:37:42.755761982 +0000
	I1030 23:37:42.868925  232335 command_runner.go:130] > Change: 2023-10-30 23:37:42.755761982 +0000
	I1030 23:37:42.868930  232335 command_runner.go:130] >  Birth: -
	I1030 23:37:42.868974  232335 start.go:540] Will wait 60s for crictl version
	I1030 23:37:42.869035  232335 ssh_runner.go:195] Run: which crictl
	I1030 23:37:42.872509  232335 command_runner.go:130] > /usr/bin/crictl
	I1030 23:37:42.872793  232335 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 23:37:42.925844  232335 command_runner.go:130] > Version:  0.1.0
	I1030 23:37:42.925872  232335 command_runner.go:130] > RuntimeName:  cri-o
	I1030 23:37:42.925880  232335 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1030 23:37:42.925888  232335 command_runner.go:130] > RuntimeApiVersion:  v1
	I1030 23:37:42.925912  232335 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
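	crictl is reading the endpoint written to /etc/crictl.yaml a few lines earlier; the same query can be issued explicitly against that socket (a sketch, assuming only the endpoint configured above):
	
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version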
	I1030 23:37:42.925984  232335 ssh_runner.go:195] Run: crio --version
	I1030 23:37:42.972557  232335 command_runner.go:130] > crio version 1.24.1
	I1030 23:37:42.972582  232335 command_runner.go:130] > Version:          1.24.1
	I1030 23:37:42.972593  232335 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1030 23:37:42.972601  232335 command_runner.go:130] > GitTreeState:     dirty
	I1030 23:37:42.972610  232335 command_runner.go:130] > BuildDate:        2023-10-30T22:24:56Z
	I1030 23:37:42.972618  232335 command_runner.go:130] > GoVersion:        go1.19.9
	I1030 23:37:42.972625  232335 command_runner.go:130] > Compiler:         gc
	I1030 23:37:42.972633  232335 command_runner.go:130] > Platform:         linux/amd64
	I1030 23:37:42.972642  232335 command_runner.go:130] > Linkmode:         dynamic
	I1030 23:37:42.972657  232335 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1030 23:37:42.972676  232335 command_runner.go:130] > SeccompEnabled:   true
	I1030 23:37:42.972684  232335 command_runner.go:130] > AppArmorEnabled:  false
	I1030 23:37:42.973976  232335 ssh_runner.go:195] Run: crio --version
	I1030 23:37:43.020220  232335 command_runner.go:130] > crio version 1.24.1
	I1030 23:37:43.020244  232335 command_runner.go:130] > Version:          1.24.1
	I1030 23:37:43.020255  232335 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1030 23:37:43.020262  232335 command_runner.go:130] > GitTreeState:     dirty
	I1030 23:37:43.020272  232335 command_runner.go:130] > BuildDate:        2023-10-30T22:24:56Z
	I1030 23:37:43.020280  232335 command_runner.go:130] > GoVersion:        go1.19.9
	I1030 23:37:43.020287  232335 command_runner.go:130] > Compiler:         gc
	I1030 23:37:43.020295  232335 command_runner.go:130] > Platform:         linux/amd64
	I1030 23:37:43.020304  232335 command_runner.go:130] > Linkmode:         dynamic
	I1030 23:37:43.020318  232335 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1030 23:37:43.020326  232335 command_runner.go:130] > SeccompEnabled:   true
	I1030 23:37:43.020332  232335 command_runner.go:130] > AppArmorEnabled:  false
	I1030 23:37:43.023705  232335 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1030 23:37:43.025355  232335 out.go:177]   - env NO_PROXY=192.168.39.231
	I1030 23:37:43.026956  232335 main.go:141] libmachine: (multinode-370491-m02) Calling .GetIP
	I1030 23:37:43.029796  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:37:43.030156  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:37:43.030187  232335 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:37:43.030393  232335 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 23:37:43.034205  232335 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1030 23:37:43.034306  232335 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491 for IP: 192.168.39.85
	I1030 23:37:43.034335  232335 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:37:43.034473  232335 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1030 23:37:43.034509  232335 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1030 23:37:43.034521  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 23:37:43.034533  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 23:37:43.034545  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 23:37:43.034557  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 23:37:43.034612  232335 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1030 23:37:43.034644  232335 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1030 23:37:43.034654  232335 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 23:37:43.034679  232335 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1030 23:37:43.034706  232335 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1030 23:37:43.034728  232335 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1030 23:37:43.034768  232335 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1030 23:37:43.034798  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:37:43.034813  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem -> /usr/share/ca-certificates/216005.pem
	I1030 23:37:43.034827  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> /usr/share/ca-certificates/2160052.pem
	I1030 23:37:43.035260  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 23:37:43.059383  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 23:37:43.081731  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 23:37:43.103578  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1030 23:37:43.126997  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 23:37:43.149502  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1030 23:37:43.173164  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1030 23:37:43.195222  232335 ssh_runner.go:195] Run: openssl version
	I1030 23:37:43.200459  232335 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1030 23:37:43.200618  232335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 23:37:43.209710  232335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:37:43.214152  232335 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:37:43.214173  232335 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:37:43.214229  232335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:37:43.219273  232335 command_runner.go:130] > b5213941
	I1030 23:37:43.219632  232335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1030 23:37:43.227478  232335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1030 23:37:43.236687  232335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1030 23:37:43.241059  232335 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1030 23:37:43.241089  232335 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1030 23:37:43.241129  232335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1030 23:37:43.246173  232335 command_runner.go:130] > 51391683
	I1030 23:37:43.246413  232335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1030 23:37:43.254227  232335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1030 23:37:43.263252  232335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1030 23:37:43.267550  232335 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1030 23:37:43.267572  232335 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1030 23:37:43.267612  232335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1030 23:37:43.272857  232335 command_runner.go:130] > 3ec20f2e
	I1030 23:37:43.272926  232335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
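	The pattern above is the standard OpenSSL subject-hash lookup: each CA placed under /usr/share/ca-certificates is linked into /etc/ssl/certs under the name <subject-hash>.0, which is how the hashes printed above (b5213941, 51391683, 3ec20f2e) become link names. A condensed sketch of the same steps for the minikube CA (illustrative only; paths as used in this run):
	
	    pem=/usr/share/ca-certificates/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$pem")   # prints b5213941 for this CA
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"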
	I1030 23:37:43.281054  232335 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1030 23:37:43.284805  232335 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1030 23:37:43.285081  232335 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1030 23:37:43.285160  232335 ssh_runner.go:195] Run: crio config
	I1030 23:37:43.332421  232335 command_runner.go:130] ! time="2023-10-30 23:37:43.320640336Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1030 23:37:43.332491  232335 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1030 23:37:43.343252  232335 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1030 23:37:43.343271  232335 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1030 23:37:43.343277  232335 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1030 23:37:43.343281  232335 command_runner.go:130] > #
	I1030 23:37:43.343287  232335 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1030 23:37:43.343293  232335 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1030 23:37:43.343299  232335 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1030 23:37:43.343306  232335 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1030 23:37:43.343310  232335 command_runner.go:130] > # reload'.
	I1030 23:37:43.343316  232335 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1030 23:37:43.343325  232335 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1030 23:37:43.343331  232335 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1030 23:37:43.343337  232335 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1030 23:37:43.343344  232335 command_runner.go:130] > [crio]
	I1030 23:37:43.343354  232335 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1030 23:37:43.343365  232335 command_runner.go:130] > # containers images, in this directory.
	I1030 23:37:43.343373  232335 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1030 23:37:43.343418  232335 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1030 23:37:43.343433  232335 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1030 23:37:43.343443  232335 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1030 23:37:43.343451  232335 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1030 23:37:43.343455  232335 command_runner.go:130] > storage_driver = "overlay"
	I1030 23:37:43.343462  232335 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1030 23:37:43.343468  232335 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1030 23:37:43.343474  232335 command_runner.go:130] > storage_option = [
	I1030 23:37:43.343482  232335 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1030 23:37:43.343486  232335 command_runner.go:130] > ]
	I1030 23:37:43.343492  232335 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1030 23:37:43.343500  232335 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1030 23:37:43.343505  232335 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1030 23:37:43.343513  232335 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1030 23:37:43.343523  232335 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1030 23:37:43.343530  232335 command_runner.go:130] > # always happen on a node reboot
	I1030 23:37:43.343535  232335 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1030 23:37:43.343543  232335 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1030 23:37:43.343550  232335 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1030 23:37:43.343561  232335 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1030 23:37:43.343569  232335 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1030 23:37:43.343577  232335 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1030 23:37:43.343587  232335 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1030 23:37:43.343594  232335 command_runner.go:130] > # internal_wipe = true
	I1030 23:37:43.343600  232335 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1030 23:37:43.343608  232335 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1030 23:37:43.343614  232335 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1030 23:37:43.343621  232335 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1030 23:37:43.343643  232335 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1030 23:37:43.343649  232335 command_runner.go:130] > [crio.api]
	I1030 23:37:43.343657  232335 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1030 23:37:43.343664  232335 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1030 23:37:43.343670  232335 command_runner.go:130] > # IP address on which the stream server will listen.
	I1030 23:37:43.343677  232335 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1030 23:37:43.343683  232335 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1030 23:37:43.343691  232335 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1030 23:37:43.343701  232335 command_runner.go:130] > # stream_port = "0"
	I1030 23:37:43.343708  232335 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1030 23:37:43.343713  232335 command_runner.go:130] > # stream_enable_tls = false
	I1030 23:37:43.343725  232335 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1030 23:37:43.343736  232335 command_runner.go:130] > # stream_idle_timeout = ""
	I1030 23:37:43.343750  232335 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1030 23:37:43.343762  232335 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1030 23:37:43.343772  232335 command_runner.go:130] > # minutes.
	I1030 23:37:43.343779  232335 command_runner.go:130] > # stream_tls_cert = ""
	I1030 23:37:43.343792  232335 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1030 23:37:43.343806  232335 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1030 23:37:43.343816  232335 command_runner.go:130] > # stream_tls_key = ""
	I1030 23:37:43.343826  232335 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1030 23:37:43.343835  232335 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1030 23:37:43.343844  232335 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1030 23:37:43.343850  232335 command_runner.go:130] > # stream_tls_ca = ""
	I1030 23:37:43.343858  232335 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1030 23:37:43.343864  232335 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1030 23:37:43.343871  232335 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1030 23:37:43.343878  232335 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1030 23:37:43.343896  232335 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1030 23:37:43.343904  232335 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1030 23:37:43.343908  232335 command_runner.go:130] > [crio.runtime]
	I1030 23:37:43.343915  232335 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1030 23:37:43.343922  232335 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1030 23:37:43.343929  232335 command_runner.go:130] > # "nofile=1024:2048"
	I1030 23:37:43.343935  232335 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1030 23:37:43.343946  232335 command_runner.go:130] > # default_ulimits = [
	I1030 23:37:43.343952  232335 command_runner.go:130] > # ]
	I1030 23:37:43.343958  232335 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1030 23:37:43.343965  232335 command_runner.go:130] > # no_pivot = false
	I1030 23:37:43.343971  232335 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1030 23:37:43.343982  232335 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1030 23:37:43.343989  232335 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1030 23:37:43.343997  232335 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1030 23:37:43.344002  232335 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1030 23:37:43.344011  232335 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1030 23:37:43.344018  232335 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1030 23:37:43.344023  232335 command_runner.go:130] > # Cgroup setting for conmon
	I1030 23:37:43.344031  232335 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1030 23:37:43.344038  232335 command_runner.go:130] > conmon_cgroup = "pod"
	I1030 23:37:43.344044  232335 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1030 23:37:43.344051  232335 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1030 23:37:43.344059  232335 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1030 23:37:43.344065  232335 command_runner.go:130] > conmon_env = [
	I1030 23:37:43.344071  232335 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1030 23:37:43.344077  232335 command_runner.go:130] > ]
	I1030 23:37:43.344083  232335 command_runner.go:130] > # Additional environment variables to set for all the
	I1030 23:37:43.344090  232335 command_runner.go:130] > # containers. These are overridden if set in the
	I1030 23:37:43.344098  232335 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1030 23:37:43.344102  232335 command_runner.go:130] > # default_env = [
	I1030 23:37:43.344110  232335 command_runner.go:130] > # ]
	I1030 23:37:43.344118  232335 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1030 23:37:43.344122  232335 command_runner.go:130] > # selinux = false
	I1030 23:37:43.344129  232335 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1030 23:37:43.344137  232335 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1030 23:37:43.344145  232335 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1030 23:37:43.344151  232335 command_runner.go:130] > # seccomp_profile = ""
	I1030 23:37:43.344157  232335 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1030 23:37:43.344165  232335 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1030 23:37:43.344174  232335 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1030 23:37:43.344181  232335 command_runner.go:130] > # which might increase security.
	I1030 23:37:43.344186  232335 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1030 23:37:43.344194  232335 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1030 23:37:43.344202  232335 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1030 23:37:43.344211  232335 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1030 23:37:43.344219  232335 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1030 23:37:43.344227  232335 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:37:43.344234  232335 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1030 23:37:43.344240  232335 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1030 23:37:43.344246  232335 command_runner.go:130] > # the cgroup blockio controller.
	I1030 23:37:43.344251  232335 command_runner.go:130] > # blockio_config_file = ""
	I1030 23:37:43.344259  232335 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1030 23:37:43.344265  232335 command_runner.go:130] > # irqbalance daemon.
	I1030 23:37:43.344270  232335 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1030 23:37:43.344279  232335 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1030 23:37:43.344285  232335 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:37:43.344291  232335 command_runner.go:130] > # rdt_config_file = ""
	I1030 23:37:43.344297  232335 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1030 23:37:43.344303  232335 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1030 23:37:43.344309  232335 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1030 23:37:43.344316  232335 command_runner.go:130] > # separate_pull_cgroup = ""
	I1030 23:37:43.344322  232335 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1030 23:37:43.344331  232335 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1030 23:37:43.344338  232335 command_runner.go:130] > # will be added.
	I1030 23:37:43.344342  232335 command_runner.go:130] > # default_capabilities = [
	I1030 23:37:43.344356  232335 command_runner.go:130] > # 	"CHOWN",
	I1030 23:37:43.344362  232335 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1030 23:37:43.344367  232335 command_runner.go:130] > # 	"FSETID",
	I1030 23:37:43.344373  232335 command_runner.go:130] > # 	"FOWNER",
	I1030 23:37:43.344377  232335 command_runner.go:130] > # 	"SETGID",
	I1030 23:37:43.344383  232335 command_runner.go:130] > # 	"SETUID",
	I1030 23:37:43.344387  232335 command_runner.go:130] > # 	"SETPCAP",
	I1030 23:37:43.344393  232335 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1030 23:37:43.344397  232335 command_runner.go:130] > # 	"KILL",
	I1030 23:37:43.344405  232335 command_runner.go:130] > # ]
	I1030 23:37:43.344411  232335 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1030 23:37:43.344420  232335 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1030 23:37:43.344426  232335 command_runner.go:130] > # default_sysctls = [
	I1030 23:37:43.344430  232335 command_runner.go:130] > # ]
	I1030 23:37:43.344437  232335 command_runner.go:130] > # List of devices on the host that a
	I1030 23:37:43.344443  232335 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1030 23:37:43.344449  232335 command_runner.go:130] > # allowed_devices = [
	I1030 23:37:43.344453  232335 command_runner.go:130] > # 	"/dev/fuse",
	I1030 23:37:43.344461  232335 command_runner.go:130] > # ]
	I1030 23:37:43.344468  232335 command_runner.go:130] > # List of additional devices, specified as
	I1030 23:37:43.344478  232335 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1030 23:37:43.344485  232335 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1030 23:37:43.344506  232335 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1030 23:37:43.344513  232335 command_runner.go:130] > # additional_devices = [
	I1030 23:37:43.344517  232335 command_runner.go:130] > # ]
	I1030 23:37:43.344525  232335 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1030 23:37:43.344529  232335 command_runner.go:130] > # cdi_spec_dirs = [
	I1030 23:37:43.344535  232335 command_runner.go:130] > # 	"/etc/cdi",
	I1030 23:37:43.344539  232335 command_runner.go:130] > # 	"/var/run/cdi",
	I1030 23:37:43.344545  232335 command_runner.go:130] > # ]
	I1030 23:37:43.344551  232335 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1030 23:37:43.344559  232335 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1030 23:37:43.344565  232335 command_runner.go:130] > # Defaults to false.
	I1030 23:37:43.344570  232335 command_runner.go:130] > # device_ownership_from_security_context = false
	I1030 23:37:43.344579  232335 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1030 23:37:43.344587  232335 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1030 23:37:43.344596  232335 command_runner.go:130] > # hooks_dir = [
	I1030 23:37:43.344604  232335 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1030 23:37:43.344607  232335 command_runner.go:130] > # ]
	I1030 23:37:43.344614  232335 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1030 23:37:43.344621  232335 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1030 23:37:43.344628  232335 command_runner.go:130] > # its default mounts from the following two files:
	I1030 23:37:43.344632  232335 command_runner.go:130] > #
	I1030 23:37:43.344639  232335 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1030 23:37:43.344648  232335 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1030 23:37:43.344656  232335 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1030 23:37:43.344661  232335 command_runner.go:130] > #
	I1030 23:37:43.344667  232335 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1030 23:37:43.344676  232335 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1030 23:37:43.344684  232335 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1030 23:37:43.344692  232335 command_runner.go:130] > #      only add mounts it finds in this file.
	I1030 23:37:43.344695  232335 command_runner.go:130] > #
	I1030 23:37:43.344703  232335 command_runner.go:130] > # default_mounts_file = ""
	I1030 23:37:43.344708  232335 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1030 23:37:43.344723  232335 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1030 23:37:43.344733  232335 command_runner.go:130] > pids_limit = 1024
	I1030 23:37:43.344743  232335 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1030 23:37:43.344756  232335 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1030 23:37:43.344769  232335 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1030 23:37:43.344785  232335 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1030 23:37:43.344795  232335 command_runner.go:130] > # log_size_max = -1
	I1030 23:37:43.344805  232335 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1030 23:37:43.344815  232335 command_runner.go:130] > # log_to_journald = false
	I1030 23:37:43.344823  232335 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1030 23:37:43.344831  232335 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1030 23:37:43.344836  232335 command_runner.go:130] > # Path to directory for container attach sockets.
	I1030 23:37:43.344844  232335 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1030 23:37:43.344849  232335 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1030 23:37:43.344855  232335 command_runner.go:130] > # bind_mount_prefix = ""
	I1030 23:37:43.344861  232335 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1030 23:37:43.344868  232335 command_runner.go:130] > # read_only = false
	I1030 23:37:43.344874  232335 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1030 23:37:43.344883  232335 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1030 23:37:43.344887  232335 command_runner.go:130] > # live configuration reload.
	I1030 23:37:43.344892  232335 command_runner.go:130] > # log_level = "info"
	I1030 23:37:43.344899  232335 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1030 23:37:43.344906  232335 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:37:43.344913  232335 command_runner.go:130] > # log_filter = ""
	I1030 23:37:43.344919  232335 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1030 23:37:43.344928  232335 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1030 23:37:43.344952  232335 command_runner.go:130] > # separated by comma.
	I1030 23:37:43.344963  232335 command_runner.go:130] > # uid_mappings = ""
	I1030 23:37:43.344973  232335 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1030 23:37:43.344985  232335 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1030 23:37:43.344990  232335 command_runner.go:130] > # separated by comma.
	I1030 23:37:43.344997  232335 command_runner.go:130] > # gid_mappings = ""
	I1030 23:37:43.345004  232335 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1030 23:37:43.345012  232335 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1030 23:37:43.345021  232335 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1030 23:37:43.345027  232335 command_runner.go:130] > # minimum_mappable_uid = -1
	I1030 23:37:43.345035  232335 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1030 23:37:43.345044  232335 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1030 23:37:43.345052  232335 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1030 23:37:43.345057  232335 command_runner.go:130] > # minimum_mappable_gid = -1
	I1030 23:37:43.345065  232335 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1030 23:37:43.345073  232335 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1030 23:37:43.345079  232335 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1030 23:37:43.345086  232335 command_runner.go:130] > # ctr_stop_timeout = 30
	I1030 23:37:43.345092  232335 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1030 23:37:43.345100  232335 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1030 23:37:43.345107  232335 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1030 23:37:43.345113  232335 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1030 23:37:43.345122  232335 command_runner.go:130] > drop_infra_ctr = false
	I1030 23:37:43.345130  232335 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1030 23:37:43.345138  232335 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1030 23:37:43.345148  232335 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1030 23:37:43.345154  232335 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1030 23:37:43.345161  232335 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1030 23:37:43.345168  232335 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1030 23:37:43.345175  232335 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1030 23:37:43.345182  232335 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1030 23:37:43.345189  232335 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1030 23:37:43.345195  232335 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1030 23:37:43.345204  232335 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1030 23:37:43.345214  232335 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1030 23:37:43.345221  232335 command_runner.go:130] > # default_runtime = "runc"
	I1030 23:37:43.345226  232335 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1030 23:37:43.345249  232335 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1030 23:37:43.345265  232335 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1030 23:37:43.345273  232335 command_runner.go:130] > # creation as a file is not desired either.
	I1030 23:37:43.345284  232335 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1030 23:37:43.345292  232335 command_runner.go:130] > # the hostname is being managed dynamically.
	I1030 23:37:43.345299  232335 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1030 23:37:43.345303  232335 command_runner.go:130] > # ]
	I1030 23:37:43.345312  232335 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1030 23:37:43.345321  232335 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1030 23:37:43.345329  232335 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1030 23:37:43.345338  232335 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1030 23:37:43.345343  232335 command_runner.go:130] > #
	I1030 23:37:43.345349  232335 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1030 23:37:43.345356  232335 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1030 23:37:43.345362  232335 command_runner.go:130] > #  runtime_type = "oci"
	I1030 23:37:43.345369  232335 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1030 23:37:43.345376  232335 command_runner.go:130] > #  privileged_without_host_devices = false
	I1030 23:37:43.345381  232335 command_runner.go:130] > #  allowed_annotations = []
	I1030 23:37:43.345387  232335 command_runner.go:130] > # Where:
	I1030 23:37:43.345393  232335 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1030 23:37:43.345401  232335 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1030 23:37:43.345407  232335 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1030 23:37:43.345414  232335 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1030 23:37:43.345420  232335 command_runner.go:130] > #   in $PATH.
	I1030 23:37:43.345426  232335 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1030 23:37:43.345433  232335 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1030 23:37:43.345439  232335 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1030 23:37:43.345446  232335 command_runner.go:130] > #   state.
	I1030 23:37:43.345452  232335 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1030 23:37:43.345461  232335 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1030 23:37:43.345469  232335 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1030 23:37:43.345477  232335 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1030 23:37:43.345485  232335 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1030 23:37:43.345494  232335 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1030 23:37:43.345499  232335 command_runner.go:130] > #   The currently recognized values are:
	I1030 23:37:43.345507  232335 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1030 23:37:43.345517  232335 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1030 23:37:43.345525  232335 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1030 23:37:43.345533  232335 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1030 23:37:43.345543  232335 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1030 23:37:43.345552  232335 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1030 23:37:43.345560  232335 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1030 23:37:43.345569  232335 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1030 23:37:43.345576  232335 command_runner.go:130] > #   should be moved to the container's cgroup
	I1030 23:37:43.345580  232335 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1030 23:37:43.345587  232335 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1030 23:37:43.345591  232335 command_runner.go:130] > runtime_type = "oci"
	I1030 23:37:43.345598  232335 command_runner.go:130] > runtime_root = "/run/runc"
	I1030 23:37:43.345602  232335 command_runner.go:130] > runtime_config_path = ""
	I1030 23:37:43.345609  232335 command_runner.go:130] > monitor_path = ""
	I1030 23:37:43.345615  232335 command_runner.go:130] > monitor_cgroup = ""
	I1030 23:37:43.345622  232335 command_runner.go:130] > monitor_exec_cgroup = ""
	I1030 23:37:43.345628  232335 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1030 23:37:43.345634  232335 command_runner.go:130] > # running containers
	I1030 23:37:43.345639  232335 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1030 23:37:43.345649  232335 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1030 23:37:43.345677  232335 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1030 23:37:43.345694  232335 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1030 23:37:43.345699  232335 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1030 23:37:43.345703  232335 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1030 23:37:43.345708  232335 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1030 23:37:43.345713  232335 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1030 23:37:43.345724  232335 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1030 23:37:43.345735  232335 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1030 23:37:43.345748  232335 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1030 23:37:43.345760  232335 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1030 23:37:43.345770  232335 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1030 23:37:43.345786  232335 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1030 23:37:43.345800  232335 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1030 23:37:43.345812  232335 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1030 23:37:43.345828  232335 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1030 23:37:43.345839  232335 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1030 23:37:43.345847  232335 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1030 23:37:43.345856  232335 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1030 23:37:43.345863  232335 command_runner.go:130] > # Example:
	I1030 23:37:43.345868  232335 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1030 23:37:43.345875  232335 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1030 23:37:43.345883  232335 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1030 23:37:43.345891  232335 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1030 23:37:43.345895  232335 command_runner.go:130] > # cpuset = 0
	I1030 23:37:43.345902  232335 command_runner.go:130] > # cpushares = "0-1"
	I1030 23:37:43.345906  232335 command_runner.go:130] > # Where:
	I1030 23:37:43.345913  232335 command_runner.go:130] > # The workload name is workload-type.
	I1030 23:37:43.345919  232335 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1030 23:37:43.345927  232335 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1030 23:37:43.345935  232335 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1030 23:37:43.345950  232335 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1030 23:37:43.345958  232335 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1030 23:37:43.345964  232335 command_runner.go:130] > # 
	I1030 23:37:43.345970  232335 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1030 23:37:43.345976  232335 command_runner.go:130] > #
	I1030 23:37:43.345982  232335 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1030 23:37:43.345990  232335 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1030 23:37:43.345999  232335 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1030 23:37:43.346006  232335 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1030 23:37:43.346014  232335 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1030 23:37:43.346018  232335 command_runner.go:130] > [crio.image]
	I1030 23:37:43.346024  232335 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1030 23:37:43.346032  232335 command_runner.go:130] > # default_transport = "docker://"
	I1030 23:37:43.346039  232335 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1030 23:37:43.346048  232335 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1030 23:37:43.346054  232335 command_runner.go:130] > # global_auth_file = ""
	I1030 23:37:43.346059  232335 command_runner.go:130] > # The image used to instantiate infra containers.
	I1030 23:37:43.346067  232335 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:37:43.346072  232335 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1030 23:37:43.346082  232335 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1030 23:37:43.346090  232335 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1030 23:37:43.346097  232335 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:37:43.346103  232335 command_runner.go:130] > # pause_image_auth_file = ""
	I1030 23:37:43.346111  232335 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1030 23:37:43.346120  232335 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1030 23:37:43.346128  232335 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1030 23:37:43.346136  232335 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1030 23:37:43.346143  232335 command_runner.go:130] > # pause_command = "/pause"
	I1030 23:37:43.346149  232335 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1030 23:37:43.346158  232335 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1030 23:37:43.346164  232335 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1030 23:37:43.346173  232335 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1030 23:37:43.346180  232335 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1030 23:37:43.346185  232335 command_runner.go:130] > # signature_policy = ""
	I1030 23:37:43.346191  232335 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1030 23:37:43.346200  232335 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1030 23:37:43.346207  232335 command_runner.go:130] > # changing them here.
	I1030 23:37:43.346212  232335 command_runner.go:130] > # insecure_registries = [
	I1030 23:37:43.346217  232335 command_runner.go:130] > # ]
	I1030 23:37:43.346226  232335 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1030 23:37:43.346234  232335 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1030 23:37:43.346241  232335 command_runner.go:130] > # image_volumes = "mkdir"
	I1030 23:37:43.346246  232335 command_runner.go:130] > # Temporary directory to use for storing big files
	I1030 23:37:43.346253  232335 command_runner.go:130] > # big_files_temporary_dir = ""
	I1030 23:37:43.346259  232335 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1030 23:37:43.346265  232335 command_runner.go:130] > # CNI plugins.
	I1030 23:37:43.346269  232335 command_runner.go:130] > [crio.network]
	I1030 23:37:43.346277  232335 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1030 23:37:43.346285  232335 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1030 23:37:43.346292  232335 command_runner.go:130] > # cni_default_network = ""
	I1030 23:37:43.346301  232335 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1030 23:37:43.346306  232335 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1030 23:37:43.346314  232335 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1030 23:37:43.346320  232335 command_runner.go:130] > # plugin_dirs = [
	I1030 23:37:43.346324  232335 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1030 23:37:43.346329  232335 command_runner.go:130] > # ]
	I1030 23:37:43.346336  232335 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1030 23:37:43.346342  232335 command_runner.go:130] > [crio.metrics]
	I1030 23:37:43.346347  232335 command_runner.go:130] > # Globally enable or disable metrics support.
	I1030 23:37:43.346353  232335 command_runner.go:130] > enable_metrics = true
	I1030 23:37:43.346358  232335 command_runner.go:130] > # Specify enabled metrics collectors.
	I1030 23:37:43.346365  232335 command_runner.go:130] > # By default, all metrics are enabled.
	I1030 23:37:43.346371  232335 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1030 23:37:43.346380  232335 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1030 23:37:43.346388  232335 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1030 23:37:43.346395  232335 command_runner.go:130] > # metrics_collectors = [
	I1030 23:37:43.346399  232335 command_runner.go:130] > # 	"operations",
	I1030 23:37:43.346405  232335 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1030 23:37:43.346409  232335 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1030 23:37:43.346416  232335 command_runner.go:130] > # 	"operations_errors",
	I1030 23:37:43.346420  232335 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1030 23:37:43.346426  232335 command_runner.go:130] > # 	"image_pulls_by_name",
	I1030 23:37:43.346431  232335 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1030 23:37:43.346438  232335 command_runner.go:130] > # 	"image_pulls_failures",
	I1030 23:37:43.346442  232335 command_runner.go:130] > # 	"image_pulls_successes",
	I1030 23:37:43.346449  232335 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1030 23:37:43.346453  232335 command_runner.go:130] > # 	"image_layer_reuse",
	I1030 23:37:43.346459  232335 command_runner.go:130] > # 	"containers_oom_total",
	I1030 23:37:43.346463  232335 command_runner.go:130] > # 	"containers_oom",
	I1030 23:37:43.346468  232335 command_runner.go:130] > # 	"processes_defunct",
	I1030 23:37:43.346473  232335 command_runner.go:130] > # 	"operations_total",
	I1030 23:37:43.346479  232335 command_runner.go:130] > # 	"operations_latency_seconds",
	I1030 23:37:43.346484  232335 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1030 23:37:43.346491  232335 command_runner.go:130] > # 	"operations_errors_total",
	I1030 23:37:43.346495  232335 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1030 23:37:43.346502  232335 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1030 23:37:43.346509  232335 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1030 23:37:43.346513  232335 command_runner.go:130] > # 	"image_pulls_success_total",
	I1030 23:37:43.346518  232335 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1030 23:37:43.346524  232335 command_runner.go:130] > # 	"containers_oom_count_total",
	I1030 23:37:43.346528  232335 command_runner.go:130] > # ]
	I1030 23:37:43.346536  232335 command_runner.go:130] > # The port on which the metrics server will listen.
	I1030 23:37:43.346540  232335 command_runner.go:130] > # metrics_port = 9090
	I1030 23:37:43.346547  232335 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1030 23:37:43.346552  232335 command_runner.go:130] > # metrics_socket = ""
	I1030 23:37:43.346557  232335 command_runner.go:130] > # The certificate for the secure metrics server.
	I1030 23:37:43.346566  232335 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1030 23:37:43.346575  232335 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1030 23:37:43.346582  232335 command_runner.go:130] > # certificate on any modification event.
	I1030 23:37:43.346586  232335 command_runner.go:130] > # metrics_cert = ""
	I1030 23:37:43.346593  232335 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1030 23:37:43.346599  232335 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1030 23:37:43.346603  232335 command_runner.go:130] > # metrics_key = ""
	I1030 23:37:43.346611  232335 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1030 23:37:43.346617  232335 command_runner.go:130] > [crio.tracing]
	I1030 23:37:43.346623  232335 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1030 23:37:43.346629  232335 command_runner.go:130] > # enable_tracing = false
	I1030 23:37:43.346635  232335 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1030 23:37:43.346642  232335 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1030 23:37:43.346647  232335 command_runner.go:130] > # Number of samples to collect per million spans.
	I1030 23:37:43.346653  232335 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1030 23:37:43.346659  232335 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1030 23:37:43.346666  232335 command_runner.go:130] > [crio.stats]
	I1030 23:37:43.346672  232335 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1030 23:37:43.346679  232335 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1030 23:37:43.346684  232335 command_runner.go:130] > # stats_collection_period = 0
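The dump above shows CRI-O metrics collection switched on (enable_metrics = true) with the commented defaults left in place. As a quick sanity check, not part of the test itself, the Prometheus endpoint can be probed from inside the VM; this is a sketch that assumes the default metrics_port of 9090 and that curl is available in the guest image (add -n m02 to target the node being provisioned):

	minikube -p multinode-370491 ssh -- curl -s http://127.0.0.1:9090/metrics | head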
	I1030 23:37:43.346758  232335 cni.go:84] Creating CNI manager for ""
	I1030 23:37:43.346771  232335 cni.go:136] 3 nodes found, recommending kindnet
	I1030 23:37:43.346785  232335 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1030 23:37:43.346810  232335 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.85 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-370491 NodeName:multinode-370491-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.85 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 23:37:43.346938  232335 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.85
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-370491-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.85
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
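This is the full kubeadm configuration minikube generates for the joining worker. The settings stored by the running cluster can be compared against it with the command the preflight output further below also points to (a sketch, using the profile name as the kubeconfig context):

	kubectl --context multinode-370491 -n kube-system get cm kubeadm-config -o yaml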
	I1030 23:37:43.346995  232335 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-370491-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.85
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-370491 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
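The fragment above is the systemd drop-in minikube writes for the kubelet: ExecStart is first cleared and then re-set with the node-specific flags (container runtime endpoint, hostname override, node IP). Once the files have been copied over, the effective unit can be inspected on the node; a sketch, assuming the -n node flag is used to address the second machine:

	minikube -p multinode-370491 ssh -n m02 -- sudo systemctl cat kubelet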
	I1030 23:37:43.347047  232335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1030 23:37:43.355544  232335 command_runner.go:130] > kubeadm
	I1030 23:37:43.355562  232335 command_runner.go:130] > kubectl
	I1030 23:37:43.355569  232335 command_runner.go:130] > kubelet
	I1030 23:37:43.355592  232335 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 23:37:43.355644  232335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1030 23:37:43.363504  232335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1030 23:37:43.378993  232335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1030 23:37:43.394000  232335 ssh_runner.go:195] Run: grep 192.168.39.231	control-plane.minikube.internal$ /etc/hosts
	I1030 23:37:43.397426  232335 command_runner.go:130] > 192.168.39.231	control-plane.minikube.internal
	I1030 23:37:43.397489  232335 host.go:66] Checking if "multinode-370491" exists ...
	I1030 23:37:43.397798  232335 config.go:182] Loaded profile config "multinode-370491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:37:43.397921  232335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:37:43.397987  232335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:37:43.412916  232335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45695
	I1030 23:37:43.413368  232335 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:37:43.413902  232335 main.go:141] libmachine: Using API Version  1
	I1030 23:37:43.413920  232335 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:37:43.414240  232335 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:37:43.414514  232335 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:37:43.414715  232335 start.go:304] JoinCluster: &{Name:multinode-370491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.3 ClusterName:multinode-370491 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.85 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.108 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1030 23:37:43.414832  232335 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1030 23:37:43.414848  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:37:43.417685  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:37:43.418199  232335 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:37:43.418226  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:37:43.418371  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:37:43.418562  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:37:43.418723  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:37:43.418901  232335 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa Username:docker}
	I1030 23:37:43.593538  232335 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 4u7dyf.6a6su8zumakdy61i --discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1030 23:37:43.597568  232335 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.85 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1030 23:37:43.597622  232335 host.go:66] Checking if "multinode-370491" exists ...
	I1030 23:37:43.597910  232335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:37:43.597950  232335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:37:43.613607  232335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33681
	I1030 23:37:43.614028  232335 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:37:43.614526  232335 main.go:141] libmachine: Using API Version  1
	I1030 23:37:43.614556  232335 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:37:43.614879  232335 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:37:43.615112  232335 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:37:43.615314  232335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-370491-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1030 23:37:43.615343  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:37:43.618430  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:37:43.618923  232335 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:37:43.618943  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:37:43.619249  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:37:43.619419  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:37:43.619584  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:37:43.619731  232335 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa Username:docker}
	I1030 23:37:43.827589  232335 command_runner.go:130] > node/multinode-370491-m02 cordoned
	I1030 23:37:46.870580  232335 command_runner.go:130] > pod "busybox-5bc68d56bd-4t8fk" has DeletionTimestamp older than 1 seconds, skipping
	I1030 23:37:46.870606  232335 command_runner.go:130] > node/multinode-370491-m02 drained
	I1030 23:37:46.872288  232335 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1030 23:37:46.872312  232335 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-76g2q, kube-system/kube-proxy-g9wzd
	I1030 23:37:46.872339  232335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-370491-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.256994374s)
	I1030 23:37:46.872359  232335 node.go:108] successfully drained node "m02"
	I1030 23:37:46.872823  232335 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:37:46.873190  232335 kapi.go:59] client config for multinode-370491: &rest.Config{Host:"https://192.168.39.231:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.crt", KeyFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.key", CAFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 23:37:46.873781  232335 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1030 23:37:46.873886  232335 round_trippers.go:463] DELETE https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:37:46.873898  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:46.873909  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:46.873917  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:46.873929  232335 round_trippers.go:473]     Content-Type: application/json
	I1030 23:37:46.887124  232335 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1030 23:37:46.887150  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:46.887160  232335 round_trippers.go:580]     Content-Length: 171
	I1030 23:37:46.887169  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:46 GMT
	I1030 23:37:46.887178  232335 round_trippers.go:580]     Audit-Id: 4fd4fd9f-01ec-4888-a555-0817cac7ea65
	I1030 23:37:46.887187  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:46.887195  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:46.887201  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:46.887206  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:46.887382  232335 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-370491-m02","kind":"nodes","uid":"8cc9a842-79bb-497b-97f8-5db56a045e7e"}}
	I1030 23:37:46.887427  232335 node.go:124] successfully deleted node "m02"
	I1030 23:37:46.887437  232335 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.85 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
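The stale m02 entry is drained and deleted before the rejoin. minikube still passes the deprecated --delete-local-data flag, hence the warning above; the equivalent manual sequence with current flags would look roughly like this (a sketch, run against the multinode-370491 context):

	kubectl drain multinode-370491-m02 --ignore-daemonsets --delete-emptydir-data --force --grace-period=1 --disable-eviction
	kubectl delete node multinode-370491-m02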
	I1030 23:37:46.887468  232335 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.85 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1030 23:37:46.887496  232335 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4u7dyf.6a6su8zumakdy61i --discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-370491-m02"
	I1030 23:37:46.939673  232335 command_runner.go:130] ! W1030 23:37:46.927781    2671 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1030 23:37:46.939727  232335 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1030 23:37:47.068901  232335 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1030 23:37:47.068953  232335 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1030 23:37:47.841059  232335 command_runner.go:130] > [preflight] Running pre-flight checks
	I1030 23:37:47.841092  232335 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1030 23:37:47.841106  232335 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1030 23:37:47.841136  232335 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 23:37:47.841151  232335 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 23:37:47.841176  232335 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1030 23:37:47.841190  232335 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1030 23:37:47.841202  232335 command_runner.go:130] > This node has joined the cluster:
	I1030 23:37:47.841214  232335 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1030 23:37:47.841225  232335 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1030 23:37:47.841238  232335 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1030 23:37:47.841277  232335 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1030 23:37:48.116934  232335 start.go:306] JoinCluster complete in 4.702212154s
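With the join complete in roughly 4.7 seconds, the node list should include m02 again. A quick check outside the test flow:

	kubectl --context multinode-370491 get nodes -o wide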
	I1030 23:37:48.116986  232335 cni.go:84] Creating CNI manager for ""
	I1030 23:37:48.116995  232335 cni.go:136] 3 nodes found, recommending kindnet
	I1030 23:37:48.117061  232335 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1030 23:37:48.122527  232335 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1030 23:37:48.122559  232335 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1030 23:37:48.122569  232335 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1030 23:37:48.122579  232335 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1030 23:37:48.122588  232335 command_runner.go:130] > Access: 2023-10-30 23:35:21.496527687 +0000
	I1030 23:37:48.122597  232335 command_runner.go:130] > Modify: 2023-10-30 22:33:43.000000000 +0000
	I1030 23:37:48.122605  232335 command_runner.go:130] > Change: 2023-10-30 23:35:19.562527687 +0000
	I1030 23:37:48.122626  232335 command_runner.go:130] >  Birth: -
	I1030 23:37:48.122727  232335 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1030 23:37:48.122743  232335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1030 23:37:48.140822  232335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1030 23:37:48.489581  232335 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1030 23:37:48.497991  232335 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1030 23:37:48.501764  232335 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1030 23:37:48.519491  232335 command_runner.go:130] > daemonset.apps/kindnet configured
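After the kindnet manifest is reapplied, the DaemonSet should schedule a pod on the rejoined node. That can be confirmed, for example, with:

	kubectl --context multinode-370491 -n kube-system rollout status daemonset/kindnet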
	I1030 23:37:48.522903  232335 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:37:48.523243  232335 kapi.go:59] client config for multinode-370491: &rest.Config{Host:"https://192.168.39.231:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.crt", KeyFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.key", CAFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 23:37:48.523667  232335 round_trippers.go:463] GET https://192.168.39.231:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1030 23:37:48.523684  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:48.523696  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:48.523706  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:48.526104  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:37:48.526124  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:48.526134  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:48 GMT
	I1030 23:37:48.526143  232335 round_trippers.go:580]     Audit-Id: d5ada637-a324-4e7e-bd5f-594a648b0fd3
	I1030 23:37:48.526155  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:48.526166  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:48.526174  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:48.526181  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:48.526187  232335 round_trippers.go:580]     Content-Length: 291
	I1030 23:37:48.526217  232335 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"20d25ead-69ff-4f03-b32f-13c215a6d708","resourceVersion":"854","creationTimestamp":"2023-10-30T23:25:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1030 23:37:48.526330  232335 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-370491" context rescaled to 1 replicas
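minikube pins the coredns Deployment at one replica for multi-node profiles; the scale subresource read above already reports replicas: 1. Doing the same rescale by hand would be:

	kubectl --context multinode-370491 -n kube-system scale deployment coredns --replicas=1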
	I1030 23:37:48.526361  232335 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.85 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1030 23:37:48.529477  232335 out.go:177] * Verifying Kubernetes components...
	I1030 23:37:48.531007  232335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 23:37:48.544106  232335 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:37:48.544431  232335 kapi.go:59] client config for multinode-370491: &rest.Config{Host:"https://192.168.39.231:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.crt", KeyFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.key", CAFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 23:37:48.544672  232335 node_ready.go:35] waiting up to 6m0s for node "multinode-370491-m02" to be "Ready" ...
	I1030 23:37:48.544738  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:37:48.544747  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:48.544754  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:48.544760  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:48.547196  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:37:48.547212  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:48.547220  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:48.547228  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:48 GMT
	I1030 23:37:48.547237  232335 round_trippers.go:580]     Audit-Id: e91934ec-bbc8-46f8-8c0c-c86ccecae061
	I1030 23:37:48.547244  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:48.547251  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:48.547259  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:48.547333  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"1aac93c1-84bb-464c-b793-174fc3813672","resourceVersion":"1007","creationTimestamp":"2023-10-30T23:37:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:37:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:37:47Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I1030 23:37:48.547599  232335 node_ready.go:49] node "multinode-370491-m02" has status "Ready":"True"
	I1030 23:37:48.547620  232335 node_ready.go:38] duration metric: took 2.933132ms waiting for node "multinode-370491-m02" to be "Ready" ...
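The readiness poll against /api/v1/nodes/multinode-370491-m02 returned Ready on the first request. The same wait, expressed as a one-liner (a sketch):

	kubectl --context multinode-370491 wait --for=condition=Ready node/multinode-370491-m02 --timeout=6m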
	I1030 23:37:48.547631  232335 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 23:37:48.547695  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I1030 23:37:48.547704  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:48.547716  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:48.547725  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:48.551760  232335 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 23:37:48.551779  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:48.551785  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:48.551790  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:48.551796  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:48.551801  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:48 GMT
	I1030 23:37:48.551806  232335 round_trippers.go:580]     Audit-Id: 6c70091c-1eed-4cd2-bb3b-964e286c65e9
	I1030 23:37:48.551811  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:48.553047  232335 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1011"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"833","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82234 chars]
	I1030 23:37:48.556547  232335 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6pgvt" in "kube-system" namespace to be "Ready" ...
	I1030 23:37:48.556636  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6pgvt
	I1030 23:37:48.556648  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:48.556660  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:48.556670  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:48.559221  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:37:48.559240  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:48.559251  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:48.559260  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:48.559269  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:48.559278  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:48.559285  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:48 GMT
	I1030 23:37:48.559290  232335 round_trippers.go:580]     Audit-Id: f14a236b-9e54-4c48-aa4b-c29101c3e0f8
	I1030 23:37:48.559449  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"833","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1030 23:37:48.559967  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:37:48.559983  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:48.559990  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:48.559996  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:48.562117  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:37:48.562131  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:48.562138  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:48.562143  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:48.562148  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:48.562153  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:48.562158  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:48 GMT
	I1030 23:37:48.562164  232335 round_trippers.go:580]     Audit-Id: 56de14a6-b250-4dc5-9073-614674194a9d
	I1030 23:37:48.562512  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"863","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1030 23:37:48.562777  232335 pod_ready.go:92] pod "coredns-5dd5756b68-6pgvt" in "kube-system" namespace has status "Ready":"True"
	I1030 23:37:48.562789  232335 pod_ready.go:81] duration metric: took 6.220666ms waiting for pod "coredns-5dd5756b68-6pgvt" in "kube-system" namespace to be "Ready" ...
	I1030 23:37:48.562798  232335 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:37:48.562837  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-370491
	I1030 23:37:48.562845  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:48.562851  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:48.562857  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:48.564465  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:37:48.564482  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:48.564492  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:48.564501  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:48.564510  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:48.564522  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:48.564534  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:48 GMT
	I1030 23:37:48.564546  232335 round_trippers.go:580]     Audit-Id: 70db9c19-fc13-411a-b124-ff2261b91727
	I1030 23:37:48.564673  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-370491","namespace":"kube-system","uid":"eb24307f-f00b-4406-bb05-b18eafd0eca1","resourceVersion":"844","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.231:2379","kubernetes.io/config.hash":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.mirror":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.seen":"2023-10-30T23:25:35.493661052Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1030 23:37:48.565026  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:37:48.565041  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:48.565051  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:48.565060  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:48.566777  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:37:48.566792  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:48.566798  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:48.566804  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:48.566812  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:48.566821  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:48 GMT
	I1030 23:37:48.566833  232335 round_trippers.go:580]     Audit-Id: 3b1aeb7b-d945-452b-be52-b23aed01fe38
	I1030 23:37:48.566846  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:48.567281  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"863","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1030 23:37:48.567565  232335 pod_ready.go:92] pod "etcd-multinode-370491" in "kube-system" namespace has status "Ready":"True"
	I1030 23:37:48.567580  232335 pod_ready.go:81] duration metric: took 4.775666ms waiting for pod "etcd-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:37:48.567600  232335 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:37:48.567646  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-370491
	I1030 23:37:48.567656  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:48.567666  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:48.567676  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:48.569410  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:37:48.569428  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:48.569439  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:48.569447  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:48.569455  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:48.569464  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:48.569476  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:48 GMT
	I1030 23:37:48.569481  232335 round_trippers.go:580]     Audit-Id: 5784f038-08a3-4678-9f99-75cb0602f947
	I1030 23:37:48.569633  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-370491","namespace":"kube-system","uid":"d1874c7c-46ee-42eb-a395-c0d0138b3422","resourceVersion":"846","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.231:8443","kubernetes.io/config.hash":"377aac2edfa5973c73516a60b3dd1cd5","kubernetes.io/config.mirror":"377aac2edfa5973c73516a60b3dd1cd5","kubernetes.io/config.seen":"2023-10-30T23:25:35.493664410Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1030 23:37:48.570119  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:37:48.570138  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:48.570149  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:48.570158  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:48.572164  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:37:48.572177  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:48.572183  232335 round_trippers.go:580]     Audit-Id: 451ed69b-21dc-4771-a870-9ad75bccfb0c
	I1030 23:37:48.572188  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:48.572194  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:48.572198  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:48.572204  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:48.572208  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:48 GMT
	I1030 23:37:48.572368  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"863","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1030 23:37:48.572719  232335 pod_ready.go:92] pod "kube-apiserver-multinode-370491" in "kube-system" namespace has status "Ready":"True"
	I1030 23:37:48.572736  232335 pod_ready.go:81] duration metric: took 5.126184ms waiting for pod "kube-apiserver-multinode-370491" in "kube-system" namespace to be "Ready" ...
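The per-pod readiness checks running here (coredns, etcd, kube-apiserver, then kube-controller-manager, kube-proxy and kube-scheduler) can be approximated in two commands using the labels visible in the responses above; a sketch only:

	kubectl --context multinode-370491 -n kube-system wait --for=condition=Ready pod -l tier=control-plane --timeout=6m
	kubectl --context multinode-370491 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m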
	I1030 23:37:48.572747  232335 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:37:48.572795  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-370491
	I1030 23:37:48.572806  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:48.572817  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:48.572827  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:48.575093  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:37:48.575109  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:48.575118  232335 round_trippers.go:580]     Audit-Id: e18407bf-ba51-4a08-90cc-5f2a872d7185
	I1030 23:37:48.575127  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:48.575134  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:48.575140  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:48.575144  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:48.575150  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:48 GMT
	I1030 23:37:48.575426  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-370491","namespace":"kube-system","uid":"4da6c57f-cec4-498b-a390-3fa2f8619a0b","resourceVersion":"827","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"55259bd1b9f1e240aa9139582b4696e7","kubernetes.io/config.mirror":"55259bd1b9f1e240aa9139582b4696e7","kubernetes.io/config.seen":"2023-10-30T23:25:35.493665415Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1030 23:37:48.575834  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:37:48.575853  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:48.575860  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:48.575866  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:48.577432  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:37:48.577444  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:48.577449  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:48.577457  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:48.577465  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:48.577473  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:48 GMT
	I1030 23:37:48.577481  232335 round_trippers.go:580]     Audit-Id: efb02af3-cdcd-47ec-aad7-4babf89b2c93
	I1030 23:37:48.577488  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:48.577703  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"863","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1030 23:37:48.577964  232335 pod_ready.go:92] pod "kube-controller-manager-multinode-370491" in "kube-system" namespace has status "Ready":"True"
	I1030 23:37:48.577977  232335 pod_ready.go:81] duration metric: took 5.224358ms waiting for pod "kube-controller-manager-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:37:48.577985  232335 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g9wzd" in "kube-system" namespace to be "Ready" ...
	I1030 23:37:48.745546  232335 request.go:629] Waited for 167.503306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wzd
	I1030 23:37:48.745630  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wzd
	I1030 23:37:48.745638  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:48.745649  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:48.745662  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:48.748514  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:37:48.748545  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:48.748556  232335 round_trippers.go:580]     Audit-Id: 40af072f-9dee-474d-aa0f-41a6b046157b
	I1030 23:37:48.748564  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:48.748573  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:48.748581  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:48.748590  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:48.748599  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:48 GMT
	I1030 23:37:48.748782  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g9wzd","generateName":"kube-proxy-","namespace":"kube-system","uid":"9bffc44c-9d7f-4d1c-82e7-f249c53bf452","resourceVersion":"948","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8ea24659-b585-4c83-ad95-b587ea718f59","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ea24659-b585-4c83-ad95-b587ea718f59\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5881 chars]
	I1030 23:37:48.945459  232335 request.go:629] Waited for 196.205389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:37:48.945519  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:37:48.945524  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:48.945533  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:48.945538  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:48.948227  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:37:48.948254  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:48.948264  232335 round_trippers.go:580]     Audit-Id: 0244504e-4082-462f-bb85-dec1844673c0
	I1030 23:37:48.948272  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:48.948280  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:48.948288  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:48.948297  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:48.948305  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:48 GMT
	I1030 23:37:48.948444  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"1aac93c1-84bb-464c-b793-174fc3813672","resourceVersion":"1007","creationTimestamp":"2023-10-30T23:37:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:37:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:37:47Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I1030 23:37:49.145305  232335 request.go:629] Waited for 196.411568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wzd
	I1030 23:37:49.145388  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wzd
	I1030 23:37:49.145399  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:49.145414  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:49.145426  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:49.148374  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:37:49.148407  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:49.148419  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:49.148427  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:49.148435  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:49.148442  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:49.148450  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:49 GMT
	I1030 23:37:49.148458  232335 round_trippers.go:580]     Audit-Id: 9e1496cf-7ab6-435a-8cec-492a14dbfb5b
	I1030 23:37:49.148647  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g9wzd","generateName":"kube-proxy-","namespace":"kube-system","uid":"9bffc44c-9d7f-4d1c-82e7-f249c53bf452","resourceVersion":"948","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8ea24659-b585-4c83-ad95-b587ea718f59","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ea24659-b585-4c83-ad95-b587ea718f59\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5881 chars]
	I1030 23:37:49.345566  232335 request.go:629] Waited for 196.349935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:37:49.345626  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:37:49.345631  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:49.345639  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:49.345645  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:49.351377  232335 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1030 23:37:49.351402  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:49.351409  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:49.351415  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:49.351420  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:49 GMT
	I1030 23:37:49.351431  232335 round_trippers.go:580]     Audit-Id: 77d00b7b-a1d6-4bd2-9e57-be60eb0cd696
	I1030 23:37:49.351436  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:49.351441  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:49.351848  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"1aac93c1-84bb-464c-b793-174fc3813672","resourceVersion":"1007","creationTimestamp":"2023-10-30T23:37:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:37:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:37:47Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I1030 23:37:49.852955  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wzd
	I1030 23:37:49.852984  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:49.852997  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:49.853006  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:49.855832  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:37:49.855851  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:49.855859  232335 round_trippers.go:580]     Audit-Id: 60f8b6be-7a60-41df-a65a-8742ab71b5ed
	I1030 23:37:49.855864  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:49.855869  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:49.855874  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:49.855879  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:49.855885  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:49 GMT
	I1030 23:37:49.856398  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g9wzd","generateName":"kube-proxy-","namespace":"kube-system","uid":"9bffc44c-9d7f-4d1c-82e7-f249c53bf452","resourceVersion":"1022","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8ea24659-b585-4c83-ad95-b587ea718f59","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ea24659-b585-4c83-ad95-b587ea718f59\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5726 chars]
	I1030 23:37:49.856805  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:37:49.856820  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:49.856829  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:49.856838  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:49.858983  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:37:49.858998  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:49.859005  232335 round_trippers.go:580]     Audit-Id: ea165744-4223-46d8-bec8-d2a431d1e1aa
	I1030 23:37:49.859011  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:49.859016  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:49.859021  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:49.859026  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:49.859032  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:49 GMT
	I1030 23:37:49.859170  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"1aac93c1-84bb-464c-b793-174fc3813672","resourceVersion":"1007","creationTimestamp":"2023-10-30T23:37:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:37:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:37:47Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I1030 23:37:49.859401  232335 pod_ready.go:92] pod "kube-proxy-g9wzd" in "kube-system" namespace has status "Ready":"True"
	I1030 23:37:49.859416  232335 pod_ready.go:81] duration metric: took 1.281424929s waiting for pod "kube-proxy-g9wzd" in "kube-system" namespace to be "Ready" ...
	I1030 23:37:49.859424  232335 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tv2b7" in "kube-system" namespace to be "Ready" ...
	I1030 23:37:49.945750  232335 request.go:629] Waited for 86.270284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tv2b7
	I1030 23:37:49.945838  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tv2b7
	I1030 23:37:49.945846  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:49.945858  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:49.945874  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:49.948318  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:37:49.948334  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:49.948344  232335 round_trippers.go:580]     Audit-Id: 8a9144a1-5791-491a-9b11-431646287af1
	I1030 23:37:49.948353  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:49.948360  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:49.948368  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:49.948382  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:49.948396  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:49 GMT
	I1030 23:37:49.948560  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tv2b7","generateName":"kube-proxy-","namespace":"kube-system","uid":"d68314ab-5356-4cd6-a611-f3efd8b2d4e0","resourceVersion":"685","creationTimestamp":"2023-10-30T23:27:17Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8ea24659-b585-4c83-ad95-b587ea718f59","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ea24659-b585-4c83-ad95-b587ea718f59\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5525 chars]
	I1030 23:37:50.145499  232335 request.go:629] Waited for 196.394818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m03
	I1030 23:37:50.145571  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m03
	I1030 23:37:50.145577  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:50.145587  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:50.145599  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:50.148036  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:37:50.148061  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:50.148073  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:50 GMT
	I1030 23:37:50.148079  232335 round_trippers.go:580]     Audit-Id: ccecba70-518c-49cc-bed5-b947cd813ece
	I1030 23:37:50.148084  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:50.148089  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:50.148094  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:50.148102  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:50.148275  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m03","uid":"5868a069-28a9-411e-b010-48ecb6a9e16b","resourceVersion":"705","creationTimestamp":"2023-10-30T23:27:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:27:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I1030 23:37:50.148548  232335 pod_ready.go:92] pod "kube-proxy-tv2b7" in "kube-system" namespace has status "Ready":"True"
	I1030 23:37:50.148565  232335 pod_ready.go:81] duration metric: took 289.134637ms waiting for pod "kube-proxy-tv2b7" in "kube-system" namespace to be "Ready" ...
	I1030 23:37:50.148578  232335 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xbsl5" in "kube-system" namespace to be "Ready" ...
	I1030 23:37:50.344878  232335 request.go:629] Waited for 196.229291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xbsl5
	I1030 23:37:50.344986  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xbsl5
	I1030 23:37:50.344995  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:50.345007  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:50.345017  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:50.347924  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:37:50.347950  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:50.347960  232335 round_trippers.go:580]     Audit-Id: b800e72a-e38e-437a-aab0-4ddd015f779f
	I1030 23:37:50.347969  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:50.347978  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:50.347986  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:50.347991  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:50.347996  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:50 GMT
	I1030 23:37:50.348309  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xbsl5","generateName":"kube-proxy-","namespace":"kube-system","uid":"eb41a78a-bf80-4546-b7d6-423a8c3ad0e1","resourceVersion":"760","creationTimestamp":"2023-10-30T23:25:47Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8ea24659-b585-4c83-ad95-b587ea718f59","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ea24659-b585-4c83-ad95-b587ea718f59\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1030 23:37:50.545608  232335 request.go:629] Waited for 196.886849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:37:50.545695  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:37:50.545702  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:50.545712  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:50.545720  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:50.548644  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:37:50.548668  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:50.548679  232335 round_trippers.go:580]     Audit-Id: 15ac56e8-40e7-4217-8758-8715ad3fcff4
	I1030 23:37:50.548688  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:50.548697  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:50.548707  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:50.548718  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:50.548736  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:50 GMT
	I1030 23:37:50.549019  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"863","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1030 23:37:50.549409  232335 pod_ready.go:92] pod "kube-proxy-xbsl5" in "kube-system" namespace has status "Ready":"True"
	I1030 23:37:50.549429  232335 pod_ready.go:81] duration metric: took 400.842265ms waiting for pod "kube-proxy-xbsl5" in "kube-system" namespace to be "Ready" ...
	I1030 23:37:50.549441  232335 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:37:50.744854  232335 request.go:629] Waited for 195.31692ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-370491
	I1030 23:37:50.744924  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-370491
	I1030 23:37:50.744930  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:50.744961  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:50.744968  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:50.747730  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:37:50.747753  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:50.747763  232335 round_trippers.go:580]     Audit-Id: c27461f9-caae-4002-b7b5-64857a2d4d58
	I1030 23:37:50.747771  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:50.747778  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:50.747786  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:50.747793  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:50.747802  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:50 GMT
	I1030 23:37:50.748275  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-370491","namespace":"kube-system","uid":"b71476bb-1843-4ff9-8639-40ae73b72c8b","resourceVersion":"855","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dd3eb04179d9bdc0a8332c92e6e42d18","kubernetes.io/config.mirror":"dd3eb04179d9bdc0a8332c92e6e42d18","kubernetes.io/config.seen":"2023-10-30T23:25:35.493666103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1030 23:37:50.945010  232335 request.go:629] Waited for 196.315619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:37:50.945105  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:37:50.945112  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:50.945125  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:50.945137  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:50.947490  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:37:50.947516  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:50.947538  232335 round_trippers.go:580]     Audit-Id: a2484701-d31f-47e7-8e0d-2fbf130e645c
	I1030 23:37:50.947548  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:50.947555  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:50.947567  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:50.947575  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:50.947583  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:50 GMT
	I1030 23:37:50.947758  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"863","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1030 23:37:50.948151  232335 pod_ready.go:92] pod "kube-scheduler-multinode-370491" in "kube-system" namespace has status "Ready":"True"
	I1030 23:37:50.948175  232335 pod_ready.go:81] duration metric: took 398.725432ms waiting for pod "kube-scheduler-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:37:50.948191  232335 pod_ready.go:38] duration metric: took 2.400545378s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 23:37:50.948215  232335 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 23:37:50.948277  232335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 23:37:50.961758  232335 system_svc.go:56] duration metric: took 13.539546ms WaitForService to wait for kubelet.
	I1030 23:37:50.961777  232335 kubeadm.go:581] duration metric: took 2.435395917s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1030 23:37:50.961795  232335 node_conditions.go:102] verifying NodePressure condition ...
	I1030 23:37:51.145244  232335 request.go:629] Waited for 183.367505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes
	I1030 23:37:51.145306  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes
	I1030 23:37:51.145312  232335 round_trippers.go:469] Request Headers:
	I1030 23:37:51.145320  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:37:51.145326  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:37:51.148393  232335 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:37:51.148419  232335 round_trippers.go:577] Response Headers:
	I1030 23:37:51.148428  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:37:51.148435  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:37:51.148443  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:37:51.148450  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:37:51 GMT
	I1030 23:37:51.148457  232335 round_trippers.go:580]     Audit-Id: 53e6415d-86eb-4b39-8673-785a0989ec33
	I1030 23:37:51.148465  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:37:51.148723  232335 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1026"},"items":[{"metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"863","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manag
edFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1", [truncated 15112 chars]
	I1030 23:37:51.149357  232335 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1030 23:37:51.149378  232335 node_conditions.go:123] node cpu capacity is 2
	I1030 23:37:51.149388  232335 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1030 23:37:51.149392  232335 node_conditions.go:123] node cpu capacity is 2
	I1030 23:37:51.149395  232335 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1030 23:37:51.149399  232335 node_conditions.go:123] node cpu capacity is 2
	I1030 23:37:51.149402  232335 node_conditions.go:105] duration metric: took 187.604082ms to run NodePressure ...
	I1030 23:37:51.149413  232335 start.go:228] waiting for startup goroutines ...
	I1030 23:37:51.149448  232335 start.go:242] writing updated cluster config ...
	I1030 23:37:51.149851  232335 config.go:182] Loaded profile config "multinode-370491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:37:51.149929  232335 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/config.json ...
	I1030 23:37:51.152542  232335 out.go:177] * Starting worker node multinode-370491-m03 in cluster multinode-370491
	I1030 23:37:51.153958  232335 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1030 23:37:51.153985  232335 cache.go:56] Caching tarball of preloaded images
	I1030 23:37:51.154087  232335 preload.go:174] Found /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 23:37:51.154098  232335 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1030 23:37:51.154184  232335 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/config.json ...
	I1030 23:37:51.154339  232335 start.go:365] acquiring machines lock for multinode-370491-m03: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 23:37:51.154380  232335 start.go:369] acquired machines lock for "multinode-370491-m03" in 21.817µs
	I1030 23:37:51.154393  232335 start.go:96] Skipping create...Using existing machine configuration
	I1030 23:37:51.154400  232335 fix.go:54] fixHost starting: m03
	I1030 23:37:51.154660  232335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:37:51.154704  232335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:37:51.169838  232335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37751
	I1030 23:37:51.170339  232335 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:37:51.170826  232335 main.go:141] libmachine: Using API Version  1
	I1030 23:37:51.170863  232335 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:37:51.171233  232335 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:37:51.171399  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .DriverName
	I1030 23:37:51.171512  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetState
	I1030 23:37:51.173327  232335 fix.go:102] recreateIfNeeded on multinode-370491-m03: state=Running err=<nil>
	W1030 23:37:51.173345  232335 fix.go:128] unexpected machine state, will restart: <nil>
	I1030 23:37:51.175415  232335 out.go:177] * Updating the running kvm2 "multinode-370491-m03" VM ...
	I1030 23:37:51.176779  232335 machine.go:88] provisioning docker machine ...
	I1030 23:37:51.176800  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .DriverName
	I1030 23:37:51.177011  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetMachineName
	I1030 23:37:51.177211  232335 buildroot.go:166] provisioning hostname "multinode-370491-m03"
	I1030 23:37:51.177230  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetMachineName
	I1030 23:37:51.177391  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHHostname
	I1030 23:37:51.179817  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:37:51.180241  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:87:71", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:27:52 +0000 UTC Type:0 Mac:52:54:00:26:87:71 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:multinode-370491-m03 Clientid:01:52:54:00:26:87:71}
	I1030 23:37:51.180274  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:37:51.180408  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHPort
	I1030 23:37:51.180601  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHKeyPath
	I1030 23:37:51.180778  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHKeyPath
	I1030 23:37:51.180952  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHUsername
	I1030 23:37:51.181120  232335 main.go:141] libmachine: Using SSH client type: native
	I1030 23:37:51.181495  232335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I1030 23:37:51.181512  232335 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-370491-m03 && echo "multinode-370491-m03" | sudo tee /etc/hostname
	I1030 23:37:51.340255  232335 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-370491-m03
	
	I1030 23:37:51.340292  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHHostname
	I1030 23:37:51.343481  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:37:51.343919  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:87:71", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:27:52 +0000 UTC Type:0 Mac:52:54:00:26:87:71 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:multinode-370491-m03 Clientid:01:52:54:00:26:87:71}
	I1030 23:37:51.343954  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:37:51.344185  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHPort
	I1030 23:37:51.344383  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHKeyPath
	I1030 23:37:51.344572  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHKeyPath
	I1030 23:37:51.344671  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHUsername
	I1030 23:37:51.344814  232335 main.go:141] libmachine: Using SSH client type: native
	I1030 23:37:51.345171  232335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I1030 23:37:51.345189  232335 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-370491-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-370491-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-370491-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 23:37:51.478006  232335 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 23:37:51.478043  232335 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1030 23:37:51.478072  232335 buildroot.go:174] setting up certificates
	I1030 23:37:51.478083  232335 provision.go:83] configureAuth start
	I1030 23:37:51.478109  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetMachineName
	I1030 23:37:51.478453  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetIP
	I1030 23:37:51.481556  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:37:51.481996  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:87:71", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:27:52 +0000 UTC Type:0 Mac:52:54:00:26:87:71 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:multinode-370491-m03 Clientid:01:52:54:00:26:87:71}
	I1030 23:37:51.482031  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:37:51.482211  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHHostname
	I1030 23:37:51.484361  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:37:51.484772  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:87:71", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:27:52 +0000 UTC Type:0 Mac:52:54:00:26:87:71 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:multinode-370491-m03 Clientid:01:52:54:00:26:87:71}
	I1030 23:37:51.484802  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:37:51.484991  232335 provision.go:138] copyHostCerts
	I1030 23:37:51.485026  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1030 23:37:51.485058  232335 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1030 23:37:51.485068  232335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1030 23:37:51.485134  232335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1030 23:37:51.485206  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1030 23:37:51.485223  232335 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1030 23:37:51.485230  232335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1030 23:37:51.485263  232335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1030 23:37:51.485305  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1030 23:37:51.485324  232335 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1030 23:37:51.485330  232335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1030 23:37:51.485350  232335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1030 23:37:51.485396  232335 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.multinode-370491-m03 san=[192.168.39.108 192.168.39.108 localhost 127.0.0.1 minikube multinode-370491-m03]
	I1030 23:37:51.762015  232335 provision.go:172] copyRemoteCerts
	I1030 23:37:51.762086  232335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 23:37:51.762127  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHHostname
	I1030 23:37:51.765112  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:37:51.765542  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:87:71", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:27:52 +0000 UTC Type:0 Mac:52:54:00:26:87:71 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:multinode-370491-m03 Clientid:01:52:54:00:26:87:71}
	I1030 23:37:51.765588  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:37:51.765749  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHPort
	I1030 23:37:51.765965  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHKeyPath
	I1030 23:37:51.766240  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHUsername
	I1030 23:37:51.766424  232335 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m03/id_rsa Username:docker}
	I1030 23:37:51.863326  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1030 23:37:51.863424  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1030 23:37:51.887123  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1030 23:37:51.887193  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1030 23:37:51.910688  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1030 23:37:51.910769  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 23:37:51.934074  232335 provision.go:86] duration metric: configureAuth took 455.962253ms
	I1030 23:37:51.934115  232335 buildroot.go:189] setting minikube options for container-runtime
	I1030 23:37:51.934387  232335 config.go:182] Loaded profile config "multinode-370491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:37:51.934478  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHHostname
	I1030 23:37:51.937072  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:37:51.937482  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:87:71", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:27:52 +0000 UTC Type:0 Mac:52:54:00:26:87:71 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:multinode-370491-m03 Clientid:01:52:54:00:26:87:71}
	I1030 23:37:51.937512  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:37:51.937729  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHPort
	I1030 23:37:51.937954  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHKeyPath
	I1030 23:37:51.938112  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHKeyPath
	I1030 23:37:51.938293  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHUsername
	I1030 23:37:51.938499  232335 main.go:141] libmachine: Using SSH client type: native
	I1030 23:37:51.938959  232335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I1030 23:37:51.938984  232335 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 23:39:22.437717  232335 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 23:39:22.437763  232335 machine.go:91] provisioned docker machine in 1m31.260966887s
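	The %!s(MISSING) tokens in the SSH command above are Go fmt placeholders left in the logged template; judging from the output that follows, the provisioning step effectively runs the shell sequence below (a sketch reconstructed from this log, not the exact minikube source). Note that this one command accounts for nearly the entire 1m31s provisioning time (23:37:51 to 23:39:22), almost all of it spent waiting on the crio restart.

	    # write minikube's CRI-O options on the node and restart the runtime
	    sudo mkdir -p /etc/sysconfig
	    printf '%s' "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube
	    sudo systemctl restart crio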
	I1030 23:39:22.437778  232335 start.go:300] post-start starting for "multinode-370491-m03" (driver="kvm2")
	I1030 23:39:22.437793  232335 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 23:39:22.437814  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .DriverName
	I1030 23:39:22.438284  232335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 23:39:22.438319  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHHostname
	I1030 23:39:22.441189  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:39:22.441688  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:87:71", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:27:52 +0000 UTC Type:0 Mac:52:54:00:26:87:71 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:multinode-370491-m03 Clientid:01:52:54:00:26:87:71}
	I1030 23:39:22.441728  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:39:22.441840  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHPort
	I1030 23:39:22.442037  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHKeyPath
	I1030 23:39:22.442213  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHUsername
	I1030 23:39:22.442368  232335 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m03/id_rsa Username:docker}
	I1030 23:39:22.540376  232335 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 23:39:22.544539  232335 command_runner.go:130] > NAME=Buildroot
	I1030 23:39:22.544559  232335 command_runner.go:130] > VERSION=2021.02.12-1-gea8740b-dirty
	I1030 23:39:22.544563  232335 command_runner.go:130] > ID=buildroot
	I1030 23:39:22.544568  232335 command_runner.go:130] > VERSION_ID=2021.02.12
	I1030 23:39:22.544573  232335 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1030 23:39:22.544806  232335 info.go:137] Remote host: Buildroot 2021.02.12
	I1030 23:39:22.544827  232335 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1030 23:39:22.544902  232335 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1030 23:39:22.545004  232335 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1030 23:39:22.545018  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> /etc/ssl/certs/2160052.pem
	I1030 23:39:22.545098  232335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 23:39:22.554237  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1030 23:39:22.576071  232335 start.go:303] post-start completed in 138.276743ms
	I1030 23:39:22.576093  232335 fix.go:56] fixHost completed within 1m31.421690869s
	I1030 23:39:22.576148  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHHostname
	I1030 23:39:22.578934  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:39:22.579432  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:87:71", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:27:52 +0000 UTC Type:0 Mac:52:54:00:26:87:71 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:multinode-370491-m03 Clientid:01:52:54:00:26:87:71}
	I1030 23:39:22.579471  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:39:22.579660  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHPort
	I1030 23:39:22.579867  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHKeyPath
	I1030 23:39:22.580059  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHKeyPath
	I1030 23:39:22.580223  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHUsername
	I1030 23:39:22.580492  232335 main.go:141] libmachine: Using SSH client type: native
	I1030 23:39:22.580834  232335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I1030 23:39:22.580845  232335 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1030 23:39:22.713551  232335 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698709162.705292787
	
	I1030 23:39:22.713576  232335 fix.go:206] guest clock: 1698709162.705292787
	I1030 23:39:22.713586  232335 fix.go:219] Guest: 2023-10-30 23:39:22.705292787 +0000 UTC Remote: 2023-10-30 23:39:22.576124664 +0000 UTC m=+552.360151086 (delta=129.168123ms)
	I1030 23:39:22.713608  232335 fix.go:190] guest clock delta is within tolerance: 129.168123ms
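	The logged template "date +%!s(MISSING).%!N(MISSING)" is again a fmt-placeholder rendering; the command actually executed is the usual epoch-with-nanoseconds query, whose result is compared against the host clock to check for skew:

	    date +%s.%N    # guest: 1698709162.705292787 in this run
	    # host-guest delta measured above: 129.168123ms, within minikube's skew tolerance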
	I1030 23:39:22.713617  232335 start.go:83] releasing machines lock for "multinode-370491-m03", held for 1m31.559227068s
	I1030 23:39:22.713672  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .DriverName
	I1030 23:39:22.713933  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetIP
	I1030 23:39:22.716554  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:39:22.716918  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:87:71", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:27:52 +0000 UTC Type:0 Mac:52:54:00:26:87:71 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:multinode-370491-m03 Clientid:01:52:54:00:26:87:71}
	I1030 23:39:22.716964  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:39:22.718983  232335 out.go:177] * Found network options:
	I1030 23:39:22.720366  232335 out.go:177]   - NO_PROXY=192.168.39.231,192.168.39.85
	W1030 23:39:22.721706  232335 proxy.go:119] fail to check proxy env: Error ip not in block
	W1030 23:39:22.721726  232335 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 23:39:22.721739  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .DriverName
	I1030 23:39:22.722267  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .DriverName
	I1030 23:39:22.722442  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .DriverName
	I1030 23:39:22.722542  232335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 23:39:22.722590  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHHostname
	W1030 23:39:22.722601  232335 proxy.go:119] fail to check proxy env: Error ip not in block
	W1030 23:39:22.722622  232335 proxy.go:119] fail to check proxy env: Error ip not in block
	I1030 23:39:22.722693  232335 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 23:39:22.722713  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHHostname
	I1030 23:39:22.725296  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:39:22.725590  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:39:22.725700  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:87:71", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:27:52 +0000 UTC Type:0 Mac:52:54:00:26:87:71 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:multinode-370491-m03 Clientid:01:52:54:00:26:87:71}
	I1030 23:39:22.725736  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:39:22.725911  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHPort
	I1030 23:39:22.726102  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:87:71", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:27:52 +0000 UTC Type:0 Mac:52:54:00:26:87:71 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:multinode-370491-m03 Clientid:01:52:54:00:26:87:71}
	I1030 23:39:22.726129  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:39:22.726147  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHKeyPath
	I1030 23:39:22.726233  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHPort
	I1030 23:39:22.726331  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHUsername
	I1030 23:39:22.726407  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHKeyPath
	I1030 23:39:22.726472  232335 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m03/id_rsa Username:docker}
	I1030 23:39:22.726505  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetSSHUsername
	I1030 23:39:22.726603  232335 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m03/id_rsa Username:docker}
	I1030 23:39:22.853265  232335 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1030 23:39:22.978094  232335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1030 23:39:22.984034  232335 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1030 23:39:22.984078  232335 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 23:39:22.984129  232335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 23:39:22.993353  232335 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1030 23:39:22.993382  232335 start.go:472] detecting cgroup driver to use...
	I1030 23:39:22.993452  232335 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 23:39:23.008350  232335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 23:39:23.021250  232335 docker.go:198] disabling cri-docker service (if available) ...
	I1030 23:39:23.021327  232335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 23:39:23.037002  232335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 23:39:23.050577  232335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1030 23:39:23.184817  232335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 23:39:23.317300  232335 docker.go:214] disabling docker service ...
	I1030 23:39:23.317364  232335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 23:39:23.347704  232335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 23:39:23.366187  232335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 23:39:23.517531  232335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 23:39:23.664363  232335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
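	Before configuring CRI-O, the sequence above shuts down and masks the competing runtimes so only CRI-O owns the CRI socket; collected into one place, the commands run over SSH are roughly:

	    # stop containerd, then disable/mask cri-dockerd and docker
	    sudo systemctl stop -f containerd
	    sudo systemctl stop -f cri-docker.socket
	    sudo systemctl stop -f cri-docker.service
	    sudo systemctl disable cri-docker.socket
	    sudo systemctl mask cri-docker.service
	    sudo systemctl stop -f docker.socket
	    sudo systemctl stop -f docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service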
	I1030 23:39:23.678820  232335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 23:39:23.696469  232335 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1030 23:39:23.696870  232335 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1030 23:39:23.696961  232335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:39:23.707304  232335 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1030 23:39:23.707384  232335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:39:23.716951  232335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:39:23.726219  232335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:39:23.735797  232335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1030 23:39:23.746058  232335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1030 23:39:23.759813  232335 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1030 23:39:23.759918  232335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1030 23:39:23.768672  232335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1030 23:39:23.901865  232335 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1030 23:39:26.682869  232335 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.780957568s)
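	Taken together, the ssh_runner steps above point crictl at the CRI-O socket and rewrite the drop-in config before restarting the runtime; as a single shell sketch (paths and values as logged):

	    # point crictl at CRI-O
	    printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	    # pin the pause image and switch CRI-O to the cgroupfs cgroup driver
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	    sudo rm -rf /etc/cni/net.mk
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload
	    sudo systemctl restart crio    # ~2.8s on this node, versus ~1m31s during provisioning above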
	I1030 23:39:26.682912  232335 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1030 23:39:26.682978  232335 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1030 23:39:26.688148  232335 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1030 23:39:26.688176  232335 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1030 23:39:26.688187  232335 command_runner.go:130] > Device: 16h/22d	Inode: 1207        Links: 1
	I1030 23:39:26.688211  232335 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1030 23:39:26.688221  232335 command_runner.go:130] > Access: 2023-10-30 23:39:26.585896462 +0000
	I1030 23:39:26.688230  232335 command_runner.go:130] > Modify: 2023-10-30 23:39:26.585896462 +0000
	I1030 23:39:26.688242  232335 command_runner.go:130] > Change: 2023-10-30 23:39:26.585896462 +0000
	I1030 23:39:26.688249  232335 command_runner.go:130] >  Birth: -
	I1030 23:39:26.688274  232335 start.go:540] Will wait 60s for crictl version
	I1030 23:39:26.688324  232335 ssh_runner.go:195] Run: which crictl
	I1030 23:39:26.692850  232335 command_runner.go:130] > /usr/bin/crictl
	I1030 23:39:26.692992  232335 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1030 23:39:26.731281  232335 command_runner.go:130] > Version:  0.1.0
	I1030 23:39:26.731306  232335 command_runner.go:130] > RuntimeName:  cri-o
	I1030 23:39:26.731311  232335 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1030 23:39:26.731316  232335 command_runner.go:130] > RuntimeApiVersion:  v1
	I1030 23:39:26.733511  232335 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1030 23:39:26.733599  232335 ssh_runner.go:195] Run: crio --version
	I1030 23:39:26.783485  232335 command_runner.go:130] > crio version 1.24.1
	I1030 23:39:26.783513  232335 command_runner.go:130] > Version:          1.24.1
	I1030 23:39:26.783521  232335 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1030 23:39:26.783528  232335 command_runner.go:130] > GitTreeState:     dirty
	I1030 23:39:26.783538  232335 command_runner.go:130] > BuildDate:        2023-10-30T22:24:56Z
	I1030 23:39:26.783546  232335 command_runner.go:130] > GoVersion:        go1.19.9
	I1030 23:39:26.783554  232335 command_runner.go:130] > Compiler:         gc
	I1030 23:39:26.783561  232335 command_runner.go:130] > Platform:         linux/amd64
	I1030 23:39:26.783573  232335 command_runner.go:130] > Linkmode:         dynamic
	I1030 23:39:26.783585  232335 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1030 23:39:26.783591  232335 command_runner.go:130] > SeccompEnabled:   true
	I1030 23:39:26.783596  232335 command_runner.go:130] > AppArmorEnabled:  false
	I1030 23:39:26.784999  232335 ssh_runner.go:195] Run: crio --version
	I1030 23:39:26.836476  232335 command_runner.go:130] > crio version 1.24.1
	I1030 23:39:26.836497  232335 command_runner.go:130] > Version:          1.24.1
	I1030 23:39:26.836504  232335 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1030 23:39:26.836509  232335 command_runner.go:130] > GitTreeState:     dirty
	I1030 23:39:26.836515  232335 command_runner.go:130] > BuildDate:        2023-10-30T22:24:56Z
	I1030 23:39:26.836519  232335 command_runner.go:130] > GoVersion:        go1.19.9
	I1030 23:39:26.836524  232335 command_runner.go:130] > Compiler:         gc
	I1030 23:39:26.836528  232335 command_runner.go:130] > Platform:         linux/amd64
	I1030 23:39:26.836533  232335 command_runner.go:130] > Linkmode:         dynamic
	I1030 23:39:26.836545  232335 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1030 23:39:26.836551  232335 command_runner.go:130] > SeccompEnabled:   true
	I1030 23:39:26.836558  232335 command_runner.go:130] > AppArmorEnabled:  false
	I1030 23:39:26.839601  232335 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1030 23:39:26.840932  232335 out.go:177]   - env NO_PROXY=192.168.39.231
	I1030 23:39:26.842182  232335 out.go:177]   - env NO_PROXY=192.168.39.231,192.168.39.85
	I1030 23:39:26.843351  232335 main.go:141] libmachine: (multinode-370491-m03) Calling .GetIP
	I1030 23:39:26.846114  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:39:26.846547  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:87:71", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:27:52 +0000 UTC Type:0 Mac:52:54:00:26:87:71 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:multinode-370491-m03 Clientid:01:52:54:00:26:87:71}
	I1030 23:39:26.846581  232335 main.go:141] libmachine: (multinode-370491-m03) DBG | domain multinode-370491-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:26:87:71 in network mk-multinode-370491
	I1030 23:39:26.846804  232335 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1030 23:39:26.850721  232335 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1030 23:39:26.850890  232335 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491 for IP: 192.168.39.108
	I1030 23:39:26.850923  232335 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1030 23:39:26.851089  232335 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1030 23:39:26.851150  232335 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1030 23:39:26.851169  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1030 23:39:26.851190  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1030 23:39:26.851208  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1030 23:39:26.851226  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1030 23:39:26.851298  232335 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1030 23:39:26.851341  232335 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1030 23:39:26.851361  232335 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1030 23:39:26.851400  232335 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1030 23:39:26.851435  232335 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1030 23:39:26.851474  232335 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1030 23:39:26.851531  232335 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1030 23:39:26.851564  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem -> /usr/share/ca-certificates/216005.pem
	I1030 23:39:26.851578  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> /usr/share/ca-certificates/2160052.pem
	I1030 23:39:26.851589  232335 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:39:26.851994  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1030 23:39:26.876063  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1030 23:39:26.900038  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1030 23:39:26.924966  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1030 23:39:26.947926  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1030 23:39:26.970550  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1030 23:39:26.992883  232335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1030 23:39:27.014345  232335 ssh_runner.go:195] Run: openssl version
	I1030 23:39:27.019429  232335 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1030 23:39:27.019806  232335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1030 23:39:27.028884  232335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1030 23:39:27.033336  232335 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1030 23:39:27.033603  232335 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1030 23:39:27.033663  232335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1030 23:39:27.038834  232335 command_runner.go:130] > 51391683
	I1030 23:39:27.039129  232335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1030 23:39:27.047223  232335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1030 23:39:27.056192  232335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1030 23:39:27.060135  232335 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1030 23:39:27.060304  232335 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1030 23:39:27.060352  232335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1030 23:39:27.065323  232335 command_runner.go:130] > 3ec20f2e
	I1030 23:39:27.065641  232335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1030 23:39:27.073397  232335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1030 23:39:27.083589  232335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:39:27.088005  232335 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:39:27.088031  232335 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:39:27.088069  232335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1030 23:39:27.092917  232335 command_runner.go:130] > b5213941
	I1030 23:39:27.093241  232335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
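	The certificate install sequence above follows the standard OpenSSL hashed-symlink layout: each PEM is linked into /etc/ssl/certs both under its own name and under its subject hash, which is how OpenSSL locates CA files at verification time. For one of the certs from this run:

	    # compute the subject hash (prints b5213941 for minikubeCA.pem in this run)
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"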
	I1030 23:39:27.101562  232335 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1030 23:39:27.105384  232335 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1030 23:39:27.105693  232335 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1030 23:39:27.105770  232335 ssh_runner.go:195] Run: crio config
	I1030 23:39:27.162603  232335 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1030 23:39:27.162630  232335 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1030 23:39:27.162641  232335 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1030 23:39:27.162647  232335 command_runner.go:130] > #
	I1030 23:39:27.162660  232335 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1030 23:39:27.162669  232335 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1030 23:39:27.162684  232335 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1030 23:39:27.162699  232335 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1030 23:39:27.162711  232335 command_runner.go:130] > # reload'.
	I1030 23:39:27.162723  232335 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1030 23:39:27.162733  232335 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1030 23:39:27.162747  232335 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1030 23:39:27.162760  232335 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1030 23:39:27.162766  232335 command_runner.go:130] > [crio]
	I1030 23:39:27.162777  232335 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1030 23:39:27.162785  232335 command_runner.go:130] > # containers images, in this directory.
	I1030 23:39:27.162797  232335 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1030 23:39:27.162814  232335 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1030 23:39:27.162827  232335 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1030 23:39:27.162840  232335 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1030 23:39:27.162849  232335 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1030 23:39:27.162854  232335 command_runner.go:130] > storage_driver = "overlay"
	I1030 23:39:27.162862  232335 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1030 23:39:27.162868  232335 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1030 23:39:27.162874  232335 command_runner.go:130] > storage_option = [
	I1030 23:39:27.162906  232335 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1030 23:39:27.162916  232335 command_runner.go:130] > ]
	I1030 23:39:27.162926  232335 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1030 23:39:27.162936  232335 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1030 23:39:27.162944  232335 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1030 23:39:27.162953  232335 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1030 23:39:27.162963  232335 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1030 23:39:27.162971  232335 command_runner.go:130] > # always happen on a node reboot
	I1030 23:39:27.162981  232335 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1030 23:39:27.162992  232335 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1030 23:39:27.163007  232335 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1030 23:39:27.163025  232335 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1030 23:39:27.163037  232335 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1030 23:39:27.163054  232335 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1030 23:39:27.163071  232335 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1030 23:39:27.163083  232335 command_runner.go:130] > # internal_wipe = true
	I1030 23:39:27.163094  232335 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1030 23:39:27.163108  232335 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1030 23:39:27.163122  232335 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1030 23:39:27.163135  232335 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1030 23:39:27.163149  232335 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1030 23:39:27.163159  232335 command_runner.go:130] > [crio.api]
	I1030 23:39:27.163172  232335 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1030 23:39:27.163185  232335 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1030 23:39:27.163198  232335 command_runner.go:130] > # IP address on which the stream server will listen.
	I1030 23:39:27.163210  232335 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1030 23:39:27.163226  232335 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1030 23:39:27.163238  232335 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1030 23:39:27.163249  232335 command_runner.go:130] > # stream_port = "0"
	I1030 23:39:27.163261  232335 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1030 23:39:27.163272  232335 command_runner.go:130] > # stream_enable_tls = false
	I1030 23:39:27.163287  232335 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1030 23:39:27.163298  232335 command_runner.go:130] > # stream_idle_timeout = ""
	I1030 23:39:27.163313  232335 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1030 23:39:27.163327  232335 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1030 23:39:27.163336  232335 command_runner.go:130] > # minutes.
	I1030 23:39:27.163343  232335 command_runner.go:130] > # stream_tls_cert = ""
	I1030 23:39:27.163354  232335 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1030 23:39:27.163467  232335 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1030 23:39:27.163484  232335 command_runner.go:130] > # stream_tls_key = ""
	I1030 23:39:27.163496  232335 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1030 23:39:27.163513  232335 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1030 23:39:27.163527  232335 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1030 23:39:27.163576  232335 command_runner.go:130] > # stream_tls_ca = ""
	I1030 23:39:27.163595  232335 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1030 23:39:27.163604  232335 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1030 23:39:27.163619  232335 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1030 23:39:27.163645  232335 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1030 23:39:27.163672  232335 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1030 23:39:27.163686  232335 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1030 23:39:27.163697  232335 command_runner.go:130] > [crio.runtime]
	I1030 23:39:27.163709  232335 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1030 23:39:27.163723  232335 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1030 23:39:27.163735  232335 command_runner.go:130] > # "nofile=1024:2048"
	I1030 23:39:27.163750  232335 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1030 23:39:27.163762  232335 command_runner.go:130] > # default_ulimits = [
	I1030 23:39:27.163771  232335 command_runner.go:130] > # ]
	I1030 23:39:27.163784  232335 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1030 23:39:27.163795  232335 command_runner.go:130] > # no_pivot = false
	I1030 23:39:27.163806  232335 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1030 23:39:27.163822  232335 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1030 23:39:27.163835  232335 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1030 23:39:27.163849  232335 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1030 23:39:27.163863  232335 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1030 23:39:27.163883  232335 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1030 23:39:27.163901  232335 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1030 23:39:27.163910  232335 command_runner.go:130] > # Cgroup setting for conmon
	I1030 23:39:27.163923  232335 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1030 23:39:27.163931  232335 command_runner.go:130] > conmon_cgroup = "pod"
	I1030 23:39:27.163945  232335 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1030 23:39:27.163955  232335 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1030 23:39:27.163964  232335 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1030 23:39:27.163976  232335 command_runner.go:130] > conmon_env = [
	I1030 23:39:27.163991  232335 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1030 23:39:27.164002  232335 command_runner.go:130] > ]
	I1030 23:39:27.164016  232335 command_runner.go:130] > # Additional environment variables to set for all the
	I1030 23:39:27.164029  232335 command_runner.go:130] > # containers. These are overridden if set in the
	I1030 23:39:27.164040  232335 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1030 23:39:27.164052  232335 command_runner.go:130] > # default_env = [
	I1030 23:39:27.164060  232335 command_runner.go:130] > # ]
	I1030 23:39:27.164074  232335 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1030 23:39:27.164087  232335 command_runner.go:130] > # selinux = false
	I1030 23:39:27.164102  232335 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1030 23:39:27.164117  232335 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1030 23:39:27.164132  232335 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1030 23:39:27.164146  232335 command_runner.go:130] > # seccomp_profile = ""
	I1030 23:39:27.164161  232335 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1030 23:39:27.164175  232335 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1030 23:39:27.164190  232335 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1030 23:39:27.164202  232335 command_runner.go:130] > # which might increase security.
	I1030 23:39:27.164209  232335 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1030 23:39:27.164220  232335 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1030 23:39:27.164236  232335 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1030 23:39:27.164252  232335 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1030 23:39:27.164267  232335 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1030 23:39:27.164281  232335 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:39:27.164292  232335 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1030 23:39:27.164307  232335 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1030 23:39:27.164315  232335 command_runner.go:130] > # the cgroup blockio controller.
	I1030 23:39:27.164326  232335 command_runner.go:130] > # blockio_config_file = ""
	I1030 23:39:27.164343  232335 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1030 23:39:27.164356  232335 command_runner.go:130] > # irqbalance daemon.
	I1030 23:39:27.164371  232335 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1030 23:39:27.164389  232335 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1030 23:39:27.164400  232335 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:39:27.164405  232335 command_runner.go:130] > # rdt_config_file = ""
	I1030 23:39:27.164415  232335 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1030 23:39:27.164420  232335 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1030 23:39:27.164428  232335 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1030 23:39:27.164433  232335 command_runner.go:130] > # separate_pull_cgroup = ""
	I1030 23:39:27.164440  232335 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1030 23:39:27.164449  232335 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1030 23:39:27.164454  232335 command_runner.go:130] > # will be added.
	I1030 23:39:27.164460  232335 command_runner.go:130] > # default_capabilities = [
	I1030 23:39:27.164464  232335 command_runner.go:130] > # 	"CHOWN",
	I1030 23:39:27.164471  232335 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1030 23:39:27.164475  232335 command_runner.go:130] > # 	"FSETID",
	I1030 23:39:27.164479  232335 command_runner.go:130] > # 	"FOWNER",
	I1030 23:39:27.164483  232335 command_runner.go:130] > # 	"SETGID",
	I1030 23:39:27.164488  232335 command_runner.go:130] > # 	"SETUID",
	I1030 23:39:27.164492  232335 command_runner.go:130] > # 	"SETPCAP",
	I1030 23:39:27.164499  232335 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1030 23:39:27.164505  232335 command_runner.go:130] > # 	"KILL",
	I1030 23:39:27.164509  232335 command_runner.go:130] > # ]
	I1030 23:39:27.164516  232335 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1030 23:39:27.164524  232335 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1030 23:39:27.164531  232335 command_runner.go:130] > # default_sysctls = [
	I1030 23:39:27.164541  232335 command_runner.go:130] > # ]
	I1030 23:39:27.164551  232335 command_runner.go:130] > # List of devices on the host that a
	I1030 23:39:27.164595  232335 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1030 23:39:27.164606  232335 command_runner.go:130] > # allowed_devices = [
	I1030 23:39:27.164611  232335 command_runner.go:130] > # 	"/dev/fuse",
	I1030 23:39:27.164615  232335 command_runner.go:130] > # ]
	I1030 23:39:27.164623  232335 command_runner.go:130] > # List of additional devices. specified as
	I1030 23:39:27.164631  232335 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1030 23:39:27.164640  232335 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1030 23:39:27.164675  232335 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1030 23:39:27.164689  232335 command_runner.go:130] > # additional_devices = [
	I1030 23:39:27.164699  232335 command_runner.go:130] > # ]
	I1030 23:39:27.164713  232335 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1030 23:39:27.164723  232335 command_runner.go:130] > # cdi_spec_dirs = [
	I1030 23:39:27.164727  232335 command_runner.go:130] > # 	"/etc/cdi",
	I1030 23:39:27.164734  232335 command_runner.go:130] > # 	"/var/run/cdi",
	I1030 23:39:27.164737  232335 command_runner.go:130] > # ]
	I1030 23:39:27.164744  232335 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1030 23:39:27.164752  232335 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1030 23:39:27.164760  232335 command_runner.go:130] > # Defaults to false.
	I1030 23:39:27.164774  232335 command_runner.go:130] > # device_ownership_from_security_context = false
	I1030 23:39:27.164790  232335 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1030 23:39:27.164805  232335 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1030 23:39:27.164817  232335 command_runner.go:130] > # hooks_dir = [
	I1030 23:39:27.164829  232335 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1030 23:39:27.164841  232335 command_runner.go:130] > # ]
	I1030 23:39:27.164857  232335 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1030 23:39:27.164873  232335 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1030 23:39:27.164903  232335 command_runner.go:130] > # its default mounts from the following two files:
	I1030 23:39:27.164914  232335 command_runner.go:130] > #
	I1030 23:39:27.164926  232335 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1030 23:39:27.164958  232335 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1030 23:39:27.164973  232335 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1030 23:39:27.164983  232335 command_runner.go:130] > #
	I1030 23:39:27.164995  232335 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1030 23:39:27.165012  232335 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1030 23:39:27.165027  232335 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1030 23:39:27.165041  232335 command_runner.go:130] > #      only add mounts it finds in this file.
	I1030 23:39:27.165049  232335 command_runner.go:130] > #
	I1030 23:39:27.165056  232335 command_runner.go:130] > # default_mounts_file = ""
	I1030 23:39:27.165069  232335 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1030 23:39:27.165087  232335 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1030 23:39:27.165099  232335 command_runner.go:130] > pids_limit = 1024
	I1030 23:39:27.165111  232335 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1030 23:39:27.165125  232335 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1030 23:39:27.165137  232335 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1030 23:39:27.165148  232335 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1030 23:39:27.165155  232335 command_runner.go:130] > # log_size_max = -1
	I1030 23:39:27.165162  232335 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1030 23:39:27.165169  232335 command_runner.go:130] > # log_to_journald = false
	I1030 23:39:27.165176  232335 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1030 23:39:27.165184  232335 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1030 23:39:27.165189  232335 command_runner.go:130] > # Path to directory for container attach sockets.
	I1030 23:39:27.165197  232335 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1030 23:39:27.165202  232335 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1030 23:39:27.165210  232335 command_runner.go:130] > # bind_mount_prefix = ""
	I1030 23:39:27.165224  232335 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1030 23:39:27.165236  232335 command_runner.go:130] > # read_only = false
	I1030 23:39:27.165252  232335 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1030 23:39:27.165267  232335 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1030 23:39:27.165280  232335 command_runner.go:130] > # live configuration reload.
	I1030 23:39:27.165289  232335 command_runner.go:130] > # log_level = "info"
	I1030 23:39:27.165299  232335 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1030 23:39:27.165312  232335 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:39:27.165321  232335 command_runner.go:130] > # log_filter = ""
	I1030 23:39:27.165336  232335 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1030 23:39:27.165351  232335 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1030 23:39:27.165363  232335 command_runner.go:130] > # separated by comma.
	I1030 23:39:27.165375  232335 command_runner.go:130] > # uid_mappings = ""
	I1030 23:39:27.165387  232335 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1030 23:39:27.165399  232335 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1030 23:39:27.165411  232335 command_runner.go:130] > # separated by comma.
	I1030 23:39:27.165420  232335 command_runner.go:130] > # gid_mappings = ""
	I1030 23:39:27.165436  232335 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1030 23:39:27.165451  232335 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1030 23:39:27.165467  232335 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1030 23:39:27.165479  232335 command_runner.go:130] > # minimum_mappable_uid = -1
	I1030 23:39:27.165491  232335 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1030 23:39:27.165505  232335 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1030 23:39:27.165521  232335 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1030 23:39:27.165534  232335 command_runner.go:130] > # minimum_mappable_gid = -1
	I1030 23:39:27.165549  232335 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1030 23:39:27.165566  232335 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1030 23:39:27.165608  232335 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1030 23:39:27.165621  232335 command_runner.go:130] > # ctr_stop_timeout = 30
	I1030 23:39:27.165634  232335 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1030 23:39:27.165649  232335 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1030 23:39:27.165667  232335 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1030 23:39:27.165680  232335 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1030 23:39:27.165691  232335 command_runner.go:130] > drop_infra_ctr = false
	I1030 23:39:27.165699  232335 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1030 23:39:27.165713  232335 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1030 23:39:27.165731  232335 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1030 23:39:27.165743  232335 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1030 23:39:27.165759  232335 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1030 23:39:27.165773  232335 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1030 23:39:27.165785  232335 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1030 23:39:27.165798  232335 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1030 23:39:27.165811  232335 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1030 23:39:27.165826  232335 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1030 23:39:27.165843  232335 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1030 23:39:27.165858  232335 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1030 23:39:27.165871  232335 command_runner.go:130] > # default_runtime = "runc"
	I1030 23:39:27.165889  232335 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1030 23:39:27.165904  232335 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1030 23:39:27.165924  232335 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1030 23:39:27.165938  232335 command_runner.go:130] > # creation as a file is not desired either.
	I1030 23:39:27.165957  232335 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1030 23:39:27.165971  232335 command_runner.go:130] > # the hostname is being managed dynamically.
	I1030 23:39:27.165983  232335 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1030 23:39:27.165991  232335 command_runner.go:130] > # ]
	I1030 23:39:27.166002  232335 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1030 23:39:27.166017  232335 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1030 23:39:27.166034  232335 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1030 23:39:27.166050  232335 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1030 23:39:27.166064  232335 command_runner.go:130] > #
	I1030 23:39:27.166077  232335 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1030 23:39:27.166087  232335 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1030 23:39:27.166101  232335 command_runner.go:130] > #  runtime_type = "oci"
	I1030 23:39:27.166114  232335 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1030 23:39:27.166128  232335 command_runner.go:130] > #  privileged_without_host_devices = false
	I1030 23:39:27.166141  232335 command_runner.go:130] > #  allowed_annotations = []
	I1030 23:39:27.166152  232335 command_runner.go:130] > # Where:
	I1030 23:39:27.166166  232335 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1030 23:39:27.166180  232335 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1030 23:39:27.166193  232335 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1030 23:39:27.166209  232335 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1030 23:39:27.166220  232335 command_runner.go:130] > #   in $PATH.
	I1030 23:39:27.166233  232335 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1030 23:39:27.166247  232335 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1030 23:39:27.166262  232335 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1030 23:39:27.166273  232335 command_runner.go:130] > #   state.
	I1030 23:39:27.166288  232335 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1030 23:39:27.166302  232335 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1030 23:39:27.166318  232335 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1030 23:39:27.166333  232335 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1030 23:39:27.166349  232335 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1030 23:39:27.166365  232335 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1030 23:39:27.166379  232335 command_runner.go:130] > #   The currently recognized values are:
	I1030 23:39:27.166392  232335 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1030 23:39:27.166409  232335 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1030 23:39:27.166424  232335 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1030 23:39:27.166440  232335 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1030 23:39:27.166458  232335 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1030 23:39:27.166474  232335 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1030 23:39:27.166488  232335 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1030 23:39:27.166501  232335 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1030 23:39:27.166514  232335 command_runner.go:130] > #   should be moved to the container's cgroup
	I1030 23:39:27.166527  232335 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1030 23:39:27.166540  232335 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1030 23:39:27.166552  232335 command_runner.go:130] > runtime_type = "oci"
	I1030 23:39:27.166565  232335 command_runner.go:130] > runtime_root = "/run/runc"
	I1030 23:39:27.166596  232335 command_runner.go:130] > runtime_config_path = ""
	I1030 23:39:27.166609  232335 command_runner.go:130] > monitor_path = ""
	I1030 23:39:27.166619  232335 command_runner.go:130] > monitor_cgroup = ""
	I1030 23:39:27.166631  232335 command_runner.go:130] > monitor_exec_cgroup = ""
	I1030 23:39:27.166647  232335 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1030 23:39:27.166659  232335 command_runner.go:130] > # running containers
	I1030 23:39:27.166671  232335 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1030 23:39:27.166684  232335 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1030 23:39:27.166721  232335 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1030 23:39:27.166736  232335 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1030 23:39:27.166746  232335 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1030 23:39:27.166759  232335 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1030 23:39:27.166772  232335 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1030 23:39:27.166785  232335 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1030 23:39:27.166799  232335 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1030 23:39:27.166809  232335 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1030 23:39:27.166820  232335 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1030 23:39:27.166834  232335 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1030 23:39:27.166845  232335 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1030 23:39:27.166863  232335 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1030 23:39:27.166886  232335 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1030 23:39:27.166901  232335 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1030 23:39:27.166918  232335 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1030 23:39:27.166935  232335 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1030 23:39:27.166949  232335 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1030 23:39:27.166963  232335 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1030 23:39:27.166974  232335 command_runner.go:130] > # Example:
	I1030 23:39:27.166987  232335 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1030 23:39:27.167001  232335 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1030 23:39:27.167011  232335 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1030 23:39:27.167019  232335 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1030 23:39:27.167029  232335 command_runner.go:130] > # cpuset = "0-1"
	I1030 23:39:27.167036  232335 command_runner.go:130] > # cpushares = 0
	I1030 23:39:27.167046  232335 command_runner.go:130] > # Where:
	I1030 23:39:27.167055  232335 command_runner.go:130] > # The workload name is workload-type.
	I1030 23:39:27.167072  232335 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1030 23:39:27.167086  232335 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1030 23:39:27.167101  232335 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1030 23:39:27.167117  232335 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1030 23:39:27.167125  232335 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1030 23:39:27.167132  232335 command_runner.go:130] > # 
	I1030 23:39:27.167138  232335 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1030 23:39:27.167145  232335 command_runner.go:130] > #
	I1030 23:39:27.167151  232335 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1030 23:39:27.167160  232335 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1030 23:39:27.167169  232335 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1030 23:39:27.167178  232335 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1030 23:39:27.167186  232335 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1030 23:39:27.167192  232335 command_runner.go:130] > [crio.image]
	I1030 23:39:27.167199  232335 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1030 23:39:27.167206  232335 command_runner.go:130] > # default_transport = "docker://"
	I1030 23:39:27.167213  232335 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1030 23:39:27.167222  232335 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1030 23:39:27.167229  232335 command_runner.go:130] > # global_auth_file = ""
	I1030 23:39:27.167234  232335 command_runner.go:130] > # The image used to instantiate infra containers.
	I1030 23:39:27.167242  232335 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:39:27.167248  232335 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1030 23:39:27.167256  232335 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1030 23:39:27.167265  232335 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1030 23:39:27.167273  232335 command_runner.go:130] > # This option supports live configuration reload.
	I1030 23:39:27.167278  232335 command_runner.go:130] > # pause_image_auth_file = ""
	I1030 23:39:27.167286  232335 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1030 23:39:27.167292  232335 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1030 23:39:27.167299  232335 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1030 23:39:27.167305  232335 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1030 23:39:27.167311  232335 command_runner.go:130] > # pause_command = "/pause"
	I1030 23:39:27.167317  232335 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1030 23:39:27.167327  232335 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1030 23:39:27.167333  232335 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1030 23:39:27.167347  232335 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1030 23:39:27.167355  232335 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1030 23:39:27.167359  232335 command_runner.go:130] > # signature_policy = ""
	I1030 23:39:27.167366  232335 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1030 23:39:27.167372  232335 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1030 23:39:27.167379  232335 command_runner.go:130] > # changing them here.
	I1030 23:39:27.167386  232335 command_runner.go:130] > # insecure_registries = [
	I1030 23:39:27.167392  232335 command_runner.go:130] > # ]
	I1030 23:39:27.167398  232335 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1030 23:39:27.167406  232335 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1030 23:39:27.167410  232335 command_runner.go:130] > # image_volumes = "mkdir"
	I1030 23:39:27.167420  232335 command_runner.go:130] > # Temporary directory to use for storing big files
	I1030 23:39:27.167427  232335 command_runner.go:130] > # big_files_temporary_dir = ""
	I1030 23:39:27.167433  232335 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1030 23:39:27.167440  232335 command_runner.go:130] > # CNI plugins.
	I1030 23:39:27.167444  232335 command_runner.go:130] > [crio.network]
	I1030 23:39:27.167451  232335 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1030 23:39:27.167459  232335 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1030 23:39:27.167466  232335 command_runner.go:130] > # cni_default_network = ""
	I1030 23:39:27.167472  232335 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1030 23:39:27.167479  232335 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1030 23:39:27.167486  232335 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1030 23:39:27.167493  232335 command_runner.go:130] > # plugin_dirs = [
	I1030 23:39:27.167497  232335 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1030 23:39:27.167503  232335 command_runner.go:130] > # ]
	I1030 23:39:27.167510  232335 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1030 23:39:27.167516  232335 command_runner.go:130] > [crio.metrics]
	I1030 23:39:27.167522  232335 command_runner.go:130] > # Globally enable or disable metrics support.
	I1030 23:39:27.167528  232335 command_runner.go:130] > enable_metrics = true
	I1030 23:39:27.167533  232335 command_runner.go:130] > # Specify enabled metrics collectors.
	I1030 23:39:27.167541  232335 command_runner.go:130] > # Per default all metrics are enabled.
	I1030 23:39:27.167548  232335 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1030 23:39:27.167557  232335 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1030 23:39:27.167565  232335 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1030 23:39:27.167572  232335 command_runner.go:130] > # metrics_collectors = [
	I1030 23:39:27.167576  232335 command_runner.go:130] > # 	"operations",
	I1030 23:39:27.167583  232335 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1030 23:39:27.167591  232335 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1030 23:39:27.167595  232335 command_runner.go:130] > # 	"operations_errors",
	I1030 23:39:27.167602  232335 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1030 23:39:27.167606  232335 command_runner.go:130] > # 	"image_pulls_by_name",
	I1030 23:39:27.167614  232335 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1030 23:39:27.167619  232335 command_runner.go:130] > # 	"image_pulls_failures",
	I1030 23:39:27.167626  232335 command_runner.go:130] > # 	"image_pulls_successes",
	I1030 23:39:27.167657  232335 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1030 23:39:27.167673  232335 command_runner.go:130] > # 	"image_layer_reuse",
	I1030 23:39:27.167680  232335 command_runner.go:130] > # 	"containers_oom_total",
	I1030 23:39:27.167684  232335 command_runner.go:130] > # 	"containers_oom",
	I1030 23:39:27.167690  232335 command_runner.go:130] > # 	"processes_defunct",
	I1030 23:39:27.167695  232335 command_runner.go:130] > # 	"operations_total",
	I1030 23:39:27.167703  232335 command_runner.go:130] > # 	"operations_latency_seconds",
	I1030 23:39:27.167708  232335 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1030 23:39:27.167715  232335 command_runner.go:130] > # 	"operations_errors_total",
	I1030 23:39:27.167719  232335 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1030 23:39:27.167727  232335 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1030 23:39:27.167734  232335 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1030 23:39:27.167739  232335 command_runner.go:130] > # 	"image_pulls_success_total",
	I1030 23:39:27.167746  232335 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1030 23:39:27.167754  232335 command_runner.go:130] > # 	"containers_oom_count_total",
	I1030 23:39:27.167758  232335 command_runner.go:130] > # ]
	I1030 23:39:27.167766  232335 command_runner.go:130] > # The port on which the metrics server will listen.
	I1030 23:39:27.167772  232335 command_runner.go:130] > # metrics_port = 9090
	I1030 23:39:27.167778  232335 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1030 23:39:27.167784  232335 command_runner.go:130] > # metrics_socket = ""
	I1030 23:39:27.167789  232335 command_runner.go:130] > # The certificate for the secure metrics server.
	I1030 23:39:27.167798  232335 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1030 23:39:27.167806  232335 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1030 23:39:27.167813  232335 command_runner.go:130] > # certificate on any modification event.
	I1030 23:39:27.167817  232335 command_runner.go:130] > # metrics_cert = ""
	I1030 23:39:27.167825  232335 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1030 23:39:27.167833  232335 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1030 23:39:27.167837  232335 command_runner.go:130] > # metrics_key = ""
	I1030 23:39:27.167845  232335 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1030 23:39:27.167854  232335 command_runner.go:130] > [crio.tracing]
	I1030 23:39:27.167862  232335 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1030 23:39:27.167869  232335 command_runner.go:130] > # enable_tracing = false
	I1030 23:39:27.167875  232335 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1030 23:39:27.167888  232335 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1030 23:39:27.167897  232335 command_runner.go:130] > # Number of samples to collect per million spans.
	I1030 23:39:27.167904  232335 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1030 23:39:27.167911  232335 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1030 23:39:27.167917  232335 command_runner.go:130] > [crio.stats]
	I1030 23:39:27.167923  232335 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1030 23:39:27.167931  232335 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1030 23:39:27.167938  232335 command_runner.go:130] > # stats_collection_period = 0
	I1030 23:39:27.167975  232335 command_runner.go:130] ! time="2023-10-30 23:39:27.151952695Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1030 23:39:27.167990  232335 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
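[Editor's note] The dump above is the CRI-O configuration minikube ended up with on this node (pause_image, pinns_path, drop_infra_ctr, the runc runtime entry, and so on). As a quick way to see only the effective, uncommented settings from such a file, here is a minimal illustrative Go sketch; it is not part of the test run, and the path /etc/crio/crio.conf is an assumption.

    // Sketch (editor's illustration, not minikube code): print the active settings
    // from a CRI-O config file, skipping blank lines and commented-out defaults.
    package main

    import (
    	"bufio"
    	"fmt"
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/etc/crio/crio.conf") // assumed path
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if line == "" || strings.HasPrefix(line, "#") {
    			continue
    		}
    		// What remains are lines like pause_image = "registry.k8s.io/pause:3.9".
    		fmt.Println(line)
    	}
    	if err := sc.Err(); err != nil {
    		log.Fatal(err)
    	}
    }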
	I1030 23:39:27.168055  232335 cni.go:84] Creating CNI manager for ""
	I1030 23:39:27.168065  232335 cni.go:136] 3 nodes found, recommending kindnet
	I1030 23:39:27.168076  232335 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1030 23:39:27.168096  232335 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.108 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-370491 NodeName:multinode-370491-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.108 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1030 23:39:27.168214  232335 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.108
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-370491-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.108
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1030 23:39:27.168264  232335 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-370491-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-370491 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1030 23:39:27.168311  232335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1030 23:39:27.176853  232335 command_runner.go:130] > kubeadm
	I1030 23:39:27.176875  232335 command_runner.go:130] > kubectl
	I1030 23:39:27.176883  232335 command_runner.go:130] > kubelet
	I1030 23:39:27.176984  232335 binaries.go:44] Found k8s binaries, skipping transfer
	I1030 23:39:27.177047  232335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1030 23:39:27.185705  232335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1030 23:39:27.201347  232335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
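[Editor's note] The two scp steps above copy a generated kubelet systemd drop-in (10-kubeadm.conf) and kubelet.service onto the joining node. Below is a simplified, illustrative Go sketch of how such a drop-in could be rendered from a few parameters with text/template; the struct and field names are assumptions, not minikube's actual types, and the values are copied from the log.

    // Sketch (editor's illustration): render a kubelet drop-in like the one above.
    package main

    import (
    	"os"
    	"text/template"
    )

    type kubeletOpts struct {
    	BinDir   string // e.g. /var/lib/minikube/binaries/v1.28.3
    	Hostname string // e.g. multinode-370491-m03
    	NodeIP   string // e.g. 192.168.39.108
    	CRISock  string // e.g. unix:///var/run/crio/crio.sock
    }

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISock}} --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("dropin").Parse(dropIn))
    	opts := kubeletOpts{
    		BinDir:   "/var/lib/minikube/binaries/v1.28.3",
    		Hostname: "multinode-370491-m03",
    		NodeIP:   "192.168.39.108",
    		CRISock:  "unix:///var/run/crio/crio.sock",
    	}
    	// In the test run this content is copied to the node over SSH; here it
    	// just goes to stdout.
    	if err := t.Execute(os.Stdout, opts); err != nil {
    		panic(err)
    	}
    }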
	I1030 23:39:27.216985  232335 ssh_runner.go:195] Run: grep 192.168.39.231	control-plane.minikube.internal$ /etc/hosts
	I1030 23:39:27.220396  232335 command_runner.go:130] > 192.168.39.231	control-plane.minikube.internal
	I1030 23:39:27.220683  232335 host.go:66] Checking if "multinode-370491" exists ...
	I1030 23:39:27.221037  232335 config.go:182] Loaded profile config "multinode-370491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:39:27.221128  232335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:39:27.221173  232335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:39:27.236235  232335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35187
	I1030 23:39:27.236722  232335 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:39:27.237209  232335 main.go:141] libmachine: Using API Version  1
	I1030 23:39:27.237232  232335 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:39:27.237580  232335 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:39:27.237840  232335 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:39:27.237996  232335 start.go:304] JoinCluster: &{Name:multinode-370491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.3 ClusterName:multinode-370491 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.85 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.108 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1030 23:39:27.238125  232335 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1030 23:39:27.238139  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:39:27.240876  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:39:27.241290  232335 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:39:27.241312  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:39:27.241495  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:39:27.241662  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:39:27.241884  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:39:27.242052  232335 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa Username:docker}
	I1030 23:39:27.426043  232335 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token clo22l.u3vbhh6us4akdrqs --discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1030 23:39:27.427748  232335 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.108 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}
	I1030 23:39:27.427791  232335 host.go:66] Checking if "multinode-370491" exists ...
	I1030 23:39:27.428110  232335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:39:27.428153  232335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:39:27.442910  232335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40699
	I1030 23:39:27.443265  232335 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:39:27.443733  232335 main.go:141] libmachine: Using API Version  1
	I1030 23:39:27.443759  232335 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:39:27.444125  232335 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:39:27.444296  232335 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:39:27.444476  232335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-370491-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1030 23:39:27.444501  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:39:27.447439  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:39:27.447838  232335 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:39:27.447871  232335 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:39:27.448031  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:39:27.448219  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:39:27.448374  232335 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:39:27.448510  232335 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa Username:docker}
	I1030 23:39:27.610269  232335 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1030 23:39:27.682208  232335 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-m45c4, kube-system/kube-proxy-tv2b7
	I1030 23:39:30.714274  232335 command_runner.go:130] > node/multinode-370491-m03 cordoned
	I1030 23:39:30.714303  232335 command_runner.go:130] > pod "busybox-5bc68d56bd-tgkst" has DeletionTimestamp older than 1 seconds, skipping
	I1030 23:39:30.714310  232335 command_runner.go:130] > node/multinode-370491-m03 drained
	I1030 23:39:30.714330  232335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-370491-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.269833297s)
	I1030 23:39:30.714346  232335 node.go:108] successfully drained node "m03"
	I1030 23:39:30.714691  232335 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:39:30.714922  232335 kapi.go:59] client config for multinode-370491: &rest.Config{Host:"https://192.168.39.231:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.crt", KeyFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.key", CAFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 23:39:30.715215  232335 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1030 23:39:30.715260  232335 round_trippers.go:463] DELETE https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m03
	I1030 23:39:30.715268  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:30.715275  232335 round_trippers.go:473]     Content-Type: application/json
	I1030 23:39:30.715281  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:30.715290  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:30.730704  232335 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1030 23:39:30.730735  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:30.730746  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:30.730754  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:30.730763  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:30.730771  232335 round_trippers.go:580]     Content-Length: 171
	I1030 23:39:30.730779  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:30 GMT
	I1030 23:39:30.730787  232335 round_trippers.go:580]     Audit-Id: 47179736-c26c-4a7c-a2fe-de248c6cfef4
	I1030 23:39:30.730795  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:30.731068  232335 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-370491-m03","kind":"nodes","uid":"5868a069-28a9-411e-b010-48ecb6a9e16b"}}
	I1030 23:39:30.731126  232335 node.go:124] successfully deleted node "m03"
	I1030 23:39:30.731141  232335 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.108 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}
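[Editor's note] The sequence above first drains the stale worker with kubectl and then removes its Node object via DELETE /api/v1/nodes/multinode-370491-m03 so the machine can rejoin cleanly. A minimal client-go sketch of that deletion step is shown below; it is an illustration only, and the kubeconfig path is an assumption.

    // Sketch (editor's illustration): delete a Node object with client-go,
    // mirroring the DELETE request logged above.
    package main

    import (
    	"context"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// The drain (kubectl drain --ignore-daemonsets ...) has already run at this
    	// point in the log; deleting the Node lets the worker rejoin as a fresh node.
    	if err := cs.CoreV1().Nodes().Delete(context.Background(), "multinode-370491-m03", metav1.DeleteOptions{}); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("node deleted")
    }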
	I1030 23:39:30.731169  232335 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.108 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}
	I1030 23:39:30.731189  232335 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token clo22l.u3vbhh6us4akdrqs --discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-370491-m03"
	I1030 23:39:30.793558  232335 command_runner.go:130] ! W1030 23:39:30.785119    2415 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1030 23:39:30.793998  232335 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1030 23:39:30.934489  232335 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1030 23:39:30.934521  232335 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1030 23:39:31.670963  232335 command_runner.go:130] > [preflight] Running pre-flight checks
	I1030 23:39:31.670988  232335 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1030 23:39:31.671004  232335 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1030 23:39:31.671026  232335 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1030 23:39:31.671040  232335 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1030 23:39:31.671051  232335 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1030 23:39:31.671064  232335 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1030 23:39:31.671079  232335 command_runner.go:130] > This node has joined the cluster:
	I1030 23:39:31.671093  232335 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1030 23:39:31.671106  232335 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1030 23:39:31.671120  232335 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1030 23:39:31.671628  232335 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1030 23:39:31.955685  232335 start.go:306] JoinCluster complete in 4.717683927s
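[Editor's note] JoinCluster, as logged above, boils down to two commands: "kubeadm token create --print-join-command --ttl=0" on the control plane, then running the printed join command on the worker with --ignore-preflight-errors=all, --cri-socket and --node-name appended. The Go sketch below illustrates that sequence locally; in the real run both commands are executed over SSH with sudo and an adjusted PATH, so this is only an approximation.

    // Sketch (editor's illustration): reproduce the token-create / join sequence.
    package main

    import (
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Ask the control plane for a join command with a non-expiring token.
    	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	joinCmd := strings.TrimSpace(string(out))

    	// Re-run it on the worker with the extra flags visible in the log.
    	args := append(strings.Fields(joinCmd)[1:], // drop the leading "kubeadm"
    		"--ignore-preflight-errors=all",
    		"--cri-socket", "/var/run/crio/crio.sock",
    		"--node-name=multinode-370491-m03",
    	)
    	if err := exec.Command("kubeadm", args...).Run(); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("join complete")
    }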
	I1030 23:39:31.955722  232335 cni.go:84] Creating CNI manager for ""
	I1030 23:39:31.955730  232335 cni.go:136] 3 nodes found, recommending kindnet
	I1030 23:39:31.955796  232335 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1030 23:39:31.962461  232335 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1030 23:39:31.962489  232335 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1030 23:39:31.962500  232335 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1030 23:39:31.962512  232335 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1030 23:39:31.962521  232335 command_runner.go:130] > Access: 2023-10-30 23:35:21.496527687 +0000
	I1030 23:39:31.962534  232335 command_runner.go:130] > Modify: 2023-10-30 22:33:43.000000000 +0000
	I1030 23:39:31.962543  232335 command_runner.go:130] > Change: 2023-10-30 23:35:19.562527687 +0000
	I1030 23:39:31.962554  232335 command_runner.go:130] >  Birth: -
	I1030 23:39:31.962758  232335 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1030 23:39:31.962779  232335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1030 23:39:31.980768  232335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1030 23:39:32.342648  232335 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1030 23:39:32.346668  232335 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1030 23:39:32.349065  232335 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1030 23:39:32.358829  232335 command_runner.go:130] > daemonset.apps/kindnet configured
	I1030 23:39:32.362324  232335 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:39:32.362545  232335 kapi.go:59] client config for multinode-370491: &rest.Config{Host:"https://192.168.39.231:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.crt", KeyFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.key", CAFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 23:39:32.362828  232335 round_trippers.go:463] GET https://192.168.39.231:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1030 23:39:32.362840  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:32.362848  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:32.362853  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:32.364817  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:39:32.364836  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:32.364844  232335 round_trippers.go:580]     Audit-Id: 0234789f-0a91-43da-b6d8-14aeb6066543
	I1030 23:39:32.364853  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:32.364861  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:32.364868  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:32.364876  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:32.364884  232335 round_trippers.go:580]     Content-Length: 291
	I1030 23:39:32.364896  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:32 GMT
	I1030 23:39:32.364929  232335 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"20d25ead-69ff-4f03-b32f-13c215a6d708","resourceVersion":"854","creationTimestamp":"2023-10-30T23:25:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1030 23:39:32.365039  232335 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-370491" context rescaled to 1 replicas
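[Editor's note] The rescale above is done through the coredns deployment's scale subresource (GET then update of /apis/apps/v1/.../deployments/coredns/scale), pinning it to 1 replica for the multi-node profile. Below is an illustrative client-go sketch of that operation; the kubeconfig path is an assumption.

    // Sketch (editor's illustration): rescale the coredns deployment via the
    // scale subresource, as the log shows.
    package main

    import (
    	"context"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	ctx := context.Background()
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Pin the replica count to 1, matching the rescale logged above.
    	scale.Spec.Replicas = 1
    	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
    		log.Fatal(err)
    	}
    }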
	I1030 23:39:32.365074  232335 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.108 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}
	I1030 23:39:32.366810  232335 out.go:177] * Verifying Kubernetes components...
	I1030 23:39:32.368304  232335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 23:39:32.382259  232335 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:39:32.382500  232335 kapi.go:59] client config for multinode-370491: &rest.Config{Host:"https://192.168.39.231:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.crt", KeyFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/multinode-370491/client.key", CAFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1030 23:39:32.382762  232335 node_ready.go:35] waiting up to 6m0s for node "multinode-370491-m03" to be "Ready" ...
	I1030 23:39:32.382831  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m03
	I1030 23:39:32.382845  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:32.382857  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:32.382870  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:32.385895  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:39:32.385928  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:32.385936  232335 round_trippers.go:580]     Audit-Id: b455fbf1-c86a-41f0-bbfd-b7645784e635
	I1030 23:39:32.385942  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:32.385948  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:32.385956  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:32.385965  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:32.385974  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:32 GMT
	I1030 23:39:32.386088  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m03","uid":"e30dd69f-4e78-4013-9c91-a62319716ad7","resourceVersion":"1191","creationTimestamp":"2023-10-30T23:39:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:39:31Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1030 23:39:32.386415  232335 node_ready.go:49] node "multinode-370491-m03" has status "Ready":"True"
	I1030 23:39:32.386436  232335 node_ready.go:38] duration metric: took 3.656617ms waiting for node "multinode-370491-m03" to be "Ready" ...
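[Editor's note] node_ready.go above polls the joined node until its Ready condition is True, with a 6m0s budget. A minimal client-go sketch of that wait loop follows; it is illustrative only, and the kubeconfig path, poll interval and timeout are assumptions taken from the log.

    // Sketch (editor's illustration): wait for a Node's Ready condition.
    package main

    import (
    	"context"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17527-208817/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
    	for time.Now().Before(deadline) {
    		n, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-370491-m03", metav1.GetOptions{})
    		if err == nil && nodeReady(n) {
    			log.Println("node is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	log.Fatal("timed out waiting for node to become Ready")
    }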
	I1030 23:39:32.386448  232335 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 23:39:32.386527  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I1030 23:39:32.386539  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:32.386551  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:32.386564  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:32.390897  232335 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1030 23:39:32.390919  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:32.390925  232335 round_trippers.go:580]     Audit-Id: 3175ab44-7f95-409c-ba2c-eb9720651e9c
	I1030 23:39:32.390931  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:32.390936  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:32.390941  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:32.390946  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:32.390955  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:32 GMT
	I1030 23:39:32.392762  232335 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1196"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"833","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82076 chars]
	I1030 23:39:32.396238  232335 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6pgvt" in "kube-system" namespace to be "Ready" ...
	I1030 23:39:32.396324  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6pgvt
	I1030 23:39:32.396336  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:32.396347  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:32.396359  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:32.398303  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:39:32.398322  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:32.398331  232335 round_trippers.go:580]     Audit-Id: 725e6160-4a14-494f-a520-87ebe0651c24
	I1030 23:39:32.398339  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:32.398346  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:32.398353  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:32.398362  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:32.398377  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:32 GMT
	I1030 23:39:32.398540  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6pgvt","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d854be1d-ae4e-420a-9853-253f0258915c","resourceVersion":"833","creationTimestamp":"2023-10-30T23:25:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"15f95fad-99f5-4f7c-9ff4-a80ead0cf109","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"15f95fad-99f5-4f7c-9ff4-a80ead0cf109\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1030 23:39:32.399024  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:39:32.399039  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:32.399050  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:32.399061  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:32.401417  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:39:32.401436  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:32.401446  232335 round_trippers.go:580]     Audit-Id: 0abd2cd3-5bb2-49ca-91c8-bff1d5e7db5d
	I1030 23:39:32.401456  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:32.401468  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:32.401482  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:32.401492  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:32.401504  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:32 GMT
	I1030 23:39:32.401643  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"863","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1030 23:39:32.402004  232335 pod_ready.go:92] pod "coredns-5dd5756b68-6pgvt" in "kube-system" namespace has status "Ready":"True"
	I1030 23:39:32.402020  232335 pod_ready.go:81] duration metric: took 5.756536ms waiting for pod "coredns-5dd5756b68-6pgvt" in "kube-system" namespace to be "Ready" ...
	I1030 23:39:32.402031  232335 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:39:32.402084  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-370491
	I1030 23:39:32.402095  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:32.402106  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:32.402118  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:32.403783  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:39:32.403800  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:32.403809  232335 round_trippers.go:580]     Audit-Id: 32a8ca45-9978-40b6-b814-d96cb3473d90
	I1030 23:39:32.403818  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:32.403827  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:32.403837  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:32.403848  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:32.403856  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:32 GMT
	I1030 23:39:32.404197  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-370491","namespace":"kube-system","uid":"eb24307f-f00b-4406-bb05-b18eafd0eca1","resourceVersion":"844","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.231:2379","kubernetes.io/config.hash":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.mirror":"840387190d79e7771c73d8f6fcb777d3","kubernetes.io/config.seen":"2023-10-30T23:25:35.493661052Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1030 23:39:32.404575  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:39:32.404592  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:32.404603  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:32.404612  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:32.406560  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:39:32.406578  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:32.406588  232335 round_trippers.go:580]     Audit-Id: f48d3c16-6cdc-4e51-a57e-5a102218a9fe
	I1030 23:39:32.406596  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:32.406604  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:32.406611  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:32.406618  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:32.406650  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:32 GMT
	I1030 23:39:32.406798  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"863","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1030 23:39:32.407088  232335 pod_ready.go:92] pod "etcd-multinode-370491" in "kube-system" namespace has status "Ready":"True"
	I1030 23:39:32.407101  232335 pod_ready.go:81] duration metric: took 5.064777ms waiting for pod "etcd-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:39:32.407115  232335 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:39:32.407168  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-370491
	I1030 23:39:32.407180  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:32.407190  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:32.407200  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:32.409093  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:39:32.409112  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:32.409121  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:32.409128  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:32.409136  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:32.409144  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:32.409156  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:32 GMT
	I1030 23:39:32.409164  232335 round_trippers.go:580]     Audit-Id: 05b00d3b-2871-4ea4-9c7a-db7f93e22899
	I1030 23:39:32.409487  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-370491","namespace":"kube-system","uid":"d1874c7c-46ee-42eb-a395-c0d0138b3422","resourceVersion":"846","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.231:8443","kubernetes.io/config.hash":"377aac2edfa5973c73516a60b3dd1cd5","kubernetes.io/config.mirror":"377aac2edfa5973c73516a60b3dd1cd5","kubernetes.io/config.seen":"2023-10-30T23:25:35.493664410Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1030 23:39:32.409824  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:39:32.409837  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:32.409849  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:32.409858  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:32.411547  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:39:32.411561  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:32.411567  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:32.411572  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:32.411577  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:32 GMT
	I1030 23:39:32.411582  232335 round_trippers.go:580]     Audit-Id: 651d0f54-f3bb-414a-9af9-f5a92041baae
	I1030 23:39:32.411589  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:32.411596  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:32.411790  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"863","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1030 23:39:32.412139  232335 pod_ready.go:92] pod "kube-apiserver-multinode-370491" in "kube-system" namespace has status "Ready":"True"
	I1030 23:39:32.412154  232335 pod_ready.go:81] duration metric: took 5.032679ms waiting for pod "kube-apiserver-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:39:32.412166  232335 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:39:32.412228  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-370491
	I1030 23:39:32.412238  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:32.412248  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:32.412261  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:32.414251  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:39:32.414263  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:32.414269  232335 round_trippers.go:580]     Audit-Id: 726901ea-07b4-4844-a159-ddb0f50d602c
	I1030 23:39:32.414274  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:32.414279  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:32.414287  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:32.414295  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:32.414307  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:32 GMT
	I1030 23:39:32.414519  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-370491","namespace":"kube-system","uid":"4da6c57f-cec4-498b-a390-3fa2f8619a0b","resourceVersion":"827","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"55259bd1b9f1e240aa9139582b4696e7","kubernetes.io/config.mirror":"55259bd1b9f1e240aa9139582b4696e7","kubernetes.io/config.seen":"2023-10-30T23:25:35.493665415Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1030 23:39:32.414817  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:39:32.414830  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:32.414841  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:32.414850  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:32.416725  232335 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1030 23:39:32.416739  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:32.416744  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:32 GMT
	I1030 23:39:32.416750  232335 round_trippers.go:580]     Audit-Id: 20e25440-318d-41f6-878c-5488f033545e
	I1030 23:39:32.416755  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:32.416760  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:32.416765  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:32.416773  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:32.416879  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"863","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1030 23:39:32.417136  232335 pod_ready.go:92] pod "kube-controller-manager-multinode-370491" in "kube-system" namespace has status "Ready":"True"
	I1030 23:39:32.417149  232335 pod_ready.go:81] duration metric: took 4.968878ms waiting for pod "kube-controller-manager-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:39:32.417160  232335 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g9wzd" in "kube-system" namespace to be "Ready" ...
	I1030 23:39:32.583695  232335 request.go:629] Waited for 166.466253ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wzd
	I1030 23:39:32.583768  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g9wzd
	I1030 23:39:32.583776  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:32.583789  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:32.583804  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:32.587805  232335 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:39:32.587828  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:32.587834  232335 round_trippers.go:580]     Audit-Id: e49dfa83-42bc-41e0-9bf3-4a1e98787c46
	I1030 23:39:32.587840  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:32.587851  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:32.587858  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:32.587866  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:32.587874  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:32 GMT
	I1030 23:39:32.588201  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g9wzd","generateName":"kube-proxy-","namespace":"kube-system","uid":"9bffc44c-9d7f-4d1c-82e7-f249c53bf452","resourceVersion":"1022","creationTimestamp":"2023-10-30T23:26:30Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8ea24659-b585-4c83-ad95-b587ea718f59","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:26:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ea24659-b585-4c83-ad95-b587ea718f59\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5726 chars]
	I1030 23:39:32.782979  232335 request.go:629] Waited for 194.289259ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:39:32.783062  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m02
	I1030 23:39:32.783073  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:32.783081  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:32.783086  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:32.787013  232335 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:39:32.787035  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:32.787042  232335 round_trippers.go:580]     Audit-Id: 743c9faf-371c-406b-a822-1a72382048b0
	I1030 23:39:32.787048  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:32.787053  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:32.787057  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:32.787062  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:32.787068  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:32 GMT
	I1030 23:39:32.787288  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m02","uid":"1aac93c1-84bb-464c-b793-174fc3813672","resourceVersion":"1007","creationTimestamp":"2023-10-30T23:37:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:37:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:37:47Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I1030 23:39:32.787580  232335 pod_ready.go:92] pod "kube-proxy-g9wzd" in "kube-system" namespace has status "Ready":"True"
	I1030 23:39:32.787598  232335 pod_ready.go:81] duration metric: took 370.43001ms waiting for pod "kube-proxy-g9wzd" in "kube-system" namespace to be "Ready" ...
	I1030 23:39:32.787611  232335 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tv2b7" in "kube-system" namespace to be "Ready" ...
	I1030 23:39:32.983046  232335 request.go:629] Waited for 195.335322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tv2b7
	I1030 23:39:32.983106  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tv2b7
	I1030 23:39:32.983114  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:32.983125  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:32.983139  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:32.986084  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:39:32.986115  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:32.986125  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:32 GMT
	I1030 23:39:32.986132  232335 round_trippers.go:580]     Audit-Id: 34c11098-70ff-4daa-bfa0-5f339fae62e2
	I1030 23:39:32.986139  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:32.986146  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:32.986153  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:32.986161  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:32.986470  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tv2b7","generateName":"kube-proxy-","namespace":"kube-system","uid":"d68314ab-5356-4cd6-a611-f3efd8b2d4e0","resourceVersion":"1136","creationTimestamp":"2023-10-30T23:27:17Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8ea24659-b585-4c83-ad95-b587ea718f59","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ea24659-b585-4c83-ad95-b587ea718f59\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5886 chars]
	I1030 23:39:33.183521  232335 request.go:629] Waited for 196.403043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m03
	I1030 23:39:33.183582  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m03
	I1030 23:39:33.183587  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:33.183595  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:33.183602  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:33.186628  232335 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:39:33.186650  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:33.186660  232335 round_trippers.go:580]     Audit-Id: 82d4fdba-ff84-43c7-a247-2b8a3d04727e
	I1030 23:39:33.186668  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:33.186676  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:33.186686  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:33.186698  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:33.186707  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:33 GMT
	I1030 23:39:33.186979  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m03","uid":"e30dd69f-4e78-4013-9c91-a62319716ad7","resourceVersion":"1191","creationTimestamp":"2023-10-30T23:39:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:39:31Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1030 23:39:33.383804  232335 request.go:629] Waited for 196.386398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tv2b7
	I1030 23:39:33.383886  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tv2b7
	I1030 23:39:33.383902  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:33.383919  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:33.383929  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:33.389195  232335 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1030 23:39:33.389222  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:33.389232  232335 round_trippers.go:580]     Audit-Id: 3e109971-36b0-4bce-8205-32c27093cd38
	I1030 23:39:33.389239  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:33.389247  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:33.389254  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:33.389262  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:33.389273  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:33 GMT
	I1030 23:39:33.389435  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tv2b7","generateName":"kube-proxy-","namespace":"kube-system","uid":"d68314ab-5356-4cd6-a611-f3efd8b2d4e0","resourceVersion":"1208","creationTimestamp":"2023-10-30T23:27:17Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8ea24659-b585-4c83-ad95-b587ea718f59","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ea24659-b585-4c83-ad95-b587ea718f59\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5730 chars]
	I1030 23:39:33.583695  232335 request.go:629] Waited for 193.799407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m03
	I1030 23:39:33.583774  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491-m03
	I1030 23:39:33.583782  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:33.583793  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:33.583805  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:33.586477  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:39:33.586498  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:33.586504  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:33 GMT
	I1030 23:39:33.586510  232335 round_trippers.go:580]     Audit-Id: 7d44b007-8823-4ed1-b51c-dfc8227ae61e
	I1030 23:39:33.586515  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:33.586520  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:33.586525  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:33.586530  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:33.586775  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491-m03","uid":"e30dd69f-4e78-4013-9c91-a62319716ad7","resourceVersion":"1191","creationTimestamp":"2023-10-30T23:39:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:39:31Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1030 23:39:33.587059  232335 pod_ready.go:92] pod "kube-proxy-tv2b7" in "kube-system" namespace has status "Ready":"True"
	I1030 23:39:33.587075  232335 pod_ready.go:81] duration metric: took 799.455862ms waiting for pod "kube-proxy-tv2b7" in "kube-system" namespace to be "Ready" ...
	I1030 23:39:33.587084  232335 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xbsl5" in "kube-system" namespace to be "Ready" ...
	I1030 23:39:33.783615  232335 request.go:629] Waited for 196.463123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xbsl5
	I1030 23:39:33.783721  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xbsl5
	I1030 23:39:33.783733  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:33.783745  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:33.783760  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:33.786700  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:39:33.786723  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:33.786733  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:33.786743  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:33.786752  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:33.786761  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:33 GMT
	I1030 23:39:33.786767  232335 round_trippers.go:580]     Audit-Id: f26ffa1d-e187-47ec-a98c-25ee56c89112
	I1030 23:39:33.786772  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:33.786931  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xbsl5","generateName":"kube-proxy-","namespace":"kube-system","uid":"eb41a78a-bf80-4546-b7d6-423a8c3ad0e1","resourceVersion":"760","creationTimestamp":"2023-10-30T23:25:47Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8ea24659-b585-4c83-ad95-b587ea718f59","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ea24659-b585-4c83-ad95-b587ea718f59\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1030 23:39:33.983901  232335 request.go:629] Waited for 196.414006ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:39:33.983983  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:39:33.983991  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:33.984003  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:33.984015  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:33.986716  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:39:33.986734  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:33.986742  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:33.986747  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:33 GMT
	I1030 23:39:33.986759  232335 round_trippers.go:580]     Audit-Id: 06d2d20e-e81d-454f-b45e-0a6b2a4f6317
	I1030 23:39:33.986767  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:33.986775  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:33.986782  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:33.986940  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"863","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1030 23:39:33.987279  232335 pod_ready.go:92] pod "kube-proxy-xbsl5" in "kube-system" namespace has status "Ready":"True"
	I1030 23:39:33.987297  232335 pod_ready.go:81] duration metric: took 400.206131ms waiting for pod "kube-proxy-xbsl5" in "kube-system" namespace to be "Ready" ...
	I1030 23:39:33.987313  232335 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:39:34.183833  232335 request.go:629] Waited for 196.41886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-370491
	I1030 23:39:34.183908  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-370491
	I1030 23:39:34.183916  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:34.184040  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:34.184061  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:34.188049  232335 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1030 23:39:34.188075  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:34.188088  232335 round_trippers.go:580]     Audit-Id: 8ea61a71-b0ea-4677-aae6-214233dc6504
	I1030 23:39:34.188096  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:34.188104  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:34.188110  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:34.188118  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:34.188126  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:34 GMT
	I1030 23:39:34.188459  232335 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-370491","namespace":"kube-system","uid":"b71476bb-1843-4ff9-8639-40ae73b72c8b","resourceVersion":"855","creationTimestamp":"2023-10-30T23:25:35Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dd3eb04179d9bdc0a8332c92e6e42d18","kubernetes.io/config.mirror":"dd3eb04179d9bdc0a8332c92e6e42d18","kubernetes.io/config.seen":"2023-10-30T23:25:35.493666103Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-30T23:25:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1030 23:39:34.383298  232335 request.go:629] Waited for 194.365859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:39:34.383380  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/multinode-370491
	I1030 23:39:34.383388  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:34.383401  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:34.383425  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:34.386410  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:39:34.386431  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:34.386438  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:34.386444  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:34 GMT
	I1030 23:39:34.386453  232335 round_trippers.go:580]     Audit-Id: 13fd16ee-558d-4a61-befe-b686811c91c1
	I1030 23:39:34.386461  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:34.386470  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:34.386480  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:34.386642  232335 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"863","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-10-30T23:25:32Z","fieldsType":"FieldsV1 [truncated 6220 chars]
	I1030 23:39:34.387002  232335 pod_ready.go:92] pod "kube-scheduler-multinode-370491" in "kube-system" namespace has status "Ready":"True"
	I1030 23:39:34.387020  232335 pod_ready.go:81] duration metric: took 399.694317ms waiting for pod "kube-scheduler-multinode-370491" in "kube-system" namespace to be "Ready" ...
	I1030 23:39:34.387034  232335 pod_ready.go:38] duration metric: took 2.000572318s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1030 23:39:34.387055  232335 system_svc.go:44] waiting for kubelet service to be running ....
	I1030 23:39:34.387113  232335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 23:39:34.400507  232335 system_svc.go:56] duration metric: took 13.444456ms WaitForService to wait for kubelet.
	I1030 23:39:34.400541  232335 kubeadm.go:581] duration metric: took 2.035434064s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1030 23:39:34.400569  232335 node_conditions.go:102] verifying NodePressure condition ...
	I1030 23:39:34.582932  232335 request.go:629] Waited for 182.25482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes
	I1030 23:39:34.582992  232335 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes
	I1030 23:39:34.582996  232335 round_trippers.go:469] Request Headers:
	I1030 23:39:34.583004  232335 round_trippers.go:473]     Accept: application/json, */*
	I1030 23:39:34.583011  232335 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1030 23:39:34.585949  232335 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1030 23:39:34.585972  232335 round_trippers.go:577] Response Headers:
	I1030 23:39:34.585980  232335 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1f683e54-adbd-41fb-b9ba-b5685a1f82ba
	I1030 23:39:34.585985  232335 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3375c77d-97e6-4fde-a5dd-b82e4c83bef8
	I1030 23:39:34.585991  232335 round_trippers.go:580]     Date: Mon, 30 Oct 2023 23:39:34 GMT
	I1030 23:39:34.585996  232335 round_trippers.go:580]     Audit-Id: 119faa42-bed4-481d-838f-13cf7431e4f3
	I1030 23:39:34.586001  232335 round_trippers.go:580]     Cache-Control: no-cache, private
	I1030 23:39:34.586006  232335 round_trippers.go:580]     Content-Type: application/json
	I1030 23:39:34.586433  232335 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1212"},"items":[{"metadata":{"name":"multinode-370491","uid":"8074d74a-99d0-44d8-8118-55f1baef45bc","resourceVersion":"863","creationTimestamp":"2023-10-30T23:25:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-370491","kubernetes.io/os":"linux","minikube.k8s.io/commit":"462855d35e0791a9ef0dc759d2782e987ae8f7f4","minikube.k8s.io/name":"multinode-370491","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_30T23_25_36_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manag
edFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1", [truncated 15141 chars]
	I1030 23:39:34.586990  232335 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1030 23:39:34.587008  232335 node_conditions.go:123] node cpu capacity is 2
	I1030 23:39:34.587019  232335 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1030 23:39:34.587023  232335 node_conditions.go:123] node cpu capacity is 2
	I1030 23:39:34.587027  232335 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1030 23:39:34.587030  232335 node_conditions.go:123] node cpu capacity is 2
	I1030 23:39:34.587034  232335 node_conditions.go:105] duration metric: took 186.460264ms to run NodePressure ...
	I1030 23:39:34.587045  232335 start.go:228] waiting for startup goroutines ...
	I1030 23:39:34.587063  232335 start.go:242] writing updated cluster config ...
	I1030 23:39:34.587377  232335 ssh_runner.go:195] Run: rm -f paused
	I1030 23:39:34.637472  232335 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1030 23:39:34.640364  232335 out.go:177] * Done! kubectl is now configured to use "multinode-370491" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-10-30 23:35:20 UTC, ends at Mon 2023-10-30 23:39:35 UTC. --
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.794756891Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e79c6f2f-e6ed-4918-8c06-10548f8bf317 name=/runtime.v1.RuntimeService/Version
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.796432478Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=50daa099-c359-488a-b2cf-d8a30058f2c3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.796851672Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698709175796838048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=50daa099-c359-488a-b2cf-d8a30058f2c3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.797500812Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d9e744f6-1b9e-4559-a983-1f77dc47fd6e name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.797660525Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d9e744f6-1b9e-4559-a983-1f77dc47fd6e name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.797912090Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e41791e5d17ef714d25b737488caf258dc170de6f1a1b4018b274ad9bbc2f75,PodSandboxId:8af6a063aac2d08b9ba1d863a21c3594427b067d7745ce1778e7725bb7339feb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698708985228045390,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f2bbacd-e138-4f82-961e-76f1daf88ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 324ceadb,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa73f543ba84059b6216e0a412f479ca285387feb7e812a82803b3ceff5c5677,PodSandboxId:b249b5e3da0dae31c588fabad9915088bff20f04d9548832e95f0be928c6d635,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698708962755628576,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-7hhs5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c28c851-1dbe-434e-a041-4bf33b87bd7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7649ab49,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ec200313c9cad65147ab68c715b246d37344c8b5d249ccc95019693797d10e,PodSandboxId:604c5102dae912f9b35ab4c51b14130e39dcccb34a16b17132dbbe5f3b76eaf3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698708961512523005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6pgvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d854be1d-ae4e-420a-9853-253f0258915c,},Annotations:map[string]string{io.kubernetes.container.hash: 11ac28ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2532c98b5ff4b4966b4e96fcf033cd7526bc3f8999d52372dbf46163c6c86088,PodSandboxId:ddda496e91b862b50bd284bc0e06e6b39fcb331bfc978d6ea90240473ae16feb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698708956692801760,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m9f5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: a79ceb52-48df-4240-9edc-05c81bf58f73,},Annotations:map[string]string{io.kubernetes.container.hash: 393bde1a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35deb9dd61a4ab129558ba94c83c819a6d0b1e7b89d90835170589f0eea474bf,PodSandboxId:8af6a063aac2d08b9ba1d863a21c3594427b067d7745ce1778e7725bb7339feb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698708954072107687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 6f2bbacd-e138-4f82-961e-76f1daf88ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 324ceadb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79b4d75864ed651a9538f08e5ba7efb78116a5f138699ad8871614d7498f0122,PodSandboxId:b225030659008b055c6b2c1adc7c9e12ab062fa912bff4873e8d609d57f4407c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698708954141614516,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xbsl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb41a78a-bf80-4546-b7d6-423a8c3a
d0e1,},Annotations:map[string]string{io.kubernetes.container.hash: b2372445,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d9ee7545d707c144c88911508813b4623dd460dbf23ac1ca53c38a0eb3906e,PodSandboxId:7536043a43d4a7b3e6d30b05516880d31f4874a6b9f149071dd556ecd7934a18,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698708947569283571,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840387190d79e7771c73d8f6fcb777d3,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: a0e21061,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53683b2ecb941e708d8c17f72ce1e6aed31d59a992e963e8973e0b9478a776fa,PodSandboxId:d71dd6439aa9600ab99bda9ae4b47cefbf27756a75295548961f8c4b31b891f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698708947471079595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd3eb04179d9bdc0a8332c92e6e42d18,},Annotations:map[string]string{io.kubernetes.container.has
h: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b5acbb071119e5db47edeebcae15e7cd63aa6d415c608934fa6516eca4585f,PodSandboxId:d65c16d53f6f3524f3ce50c02959db63360ef0f4a037e582d6c3249cab029c51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698708947143797194,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377aac2edfa5973c73516a60b3dd1cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 4e859895,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316ecfe87c0e64b42de40ae866a40306865631dd99af44eb29f3632a9cae047e,PodSandboxId:62e29da5962deaafa1f1ddaa34c686d8a85409a1d14016d8b9ab2ec46f639e3d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698708947072942109,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55259bd1b9f1e240aa9139582b4696e7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d9e744f6-1b9e-4559-a983-1f77dc47fd6e name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.838470097Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a956106f-e4e5-4862-a8d7-ad7c4fef27f3 name=/runtime.v1.RuntimeService/Version
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.838608555Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a956106f-e4e5-4862-a8d7-ad7c4fef27f3 name=/runtime.v1.RuntimeService/Version
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.839893712Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0f2150ed-7521-4c65-add2-35bf399907f0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.840261082Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698709175840249024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0f2150ed-7521-4c65-add2-35bf399907f0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.840857197Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=082d72d6-ac13-4eff-9de8-d516020f553b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.840928863Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=082d72d6-ac13-4eff-9de8-d516020f553b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.841125706Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e41791e5d17ef714d25b737488caf258dc170de6f1a1b4018b274ad9bbc2f75,PodSandboxId:8af6a063aac2d08b9ba1d863a21c3594427b067d7745ce1778e7725bb7339feb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698708985228045390,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f2bbacd-e138-4f82-961e-76f1daf88ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 324ceadb,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa73f543ba84059b6216e0a412f479ca285387feb7e812a82803b3ceff5c5677,PodSandboxId:b249b5e3da0dae31c588fabad9915088bff20f04d9548832e95f0be928c6d635,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698708962755628576,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-7hhs5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c28c851-1dbe-434e-a041-4bf33b87bd7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7649ab49,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ec200313c9cad65147ab68c715b246d37344c8b5d249ccc95019693797d10e,PodSandboxId:604c5102dae912f9b35ab4c51b14130e39dcccb34a16b17132dbbe5f3b76eaf3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698708961512523005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6pgvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d854be1d-ae4e-420a-9853-253f0258915c,},Annotations:map[string]string{io.kubernetes.container.hash: 11ac28ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2532c98b5ff4b4966b4e96fcf033cd7526bc3f8999d52372dbf46163c6c86088,PodSandboxId:ddda496e91b862b50bd284bc0e06e6b39fcb331bfc978d6ea90240473ae16feb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698708956692801760,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m9f5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: a79ceb52-48df-4240-9edc-05c81bf58f73,},Annotations:map[string]string{io.kubernetes.container.hash: 393bde1a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35deb9dd61a4ab129558ba94c83c819a6d0b1e7b89d90835170589f0eea474bf,PodSandboxId:8af6a063aac2d08b9ba1d863a21c3594427b067d7745ce1778e7725bb7339feb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698708954072107687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 6f2bbacd-e138-4f82-961e-76f1daf88ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 324ceadb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79b4d75864ed651a9538f08e5ba7efb78116a5f138699ad8871614d7498f0122,PodSandboxId:b225030659008b055c6b2c1adc7c9e12ab062fa912bff4873e8d609d57f4407c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698708954141614516,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xbsl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb41a78a-bf80-4546-b7d6-423a8c3a
d0e1,},Annotations:map[string]string{io.kubernetes.container.hash: b2372445,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d9ee7545d707c144c88911508813b4623dd460dbf23ac1ca53c38a0eb3906e,PodSandboxId:7536043a43d4a7b3e6d30b05516880d31f4874a6b9f149071dd556ecd7934a18,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698708947569283571,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840387190d79e7771c73d8f6fcb777d3,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: a0e21061,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53683b2ecb941e708d8c17f72ce1e6aed31d59a992e963e8973e0b9478a776fa,PodSandboxId:d71dd6439aa9600ab99bda9ae4b47cefbf27756a75295548961f8c4b31b891f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698708947471079595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd3eb04179d9bdc0a8332c92e6e42d18,},Annotations:map[string]string{io.kubernetes.container.has
h: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b5acbb071119e5db47edeebcae15e7cd63aa6d415c608934fa6516eca4585f,PodSandboxId:d65c16d53f6f3524f3ce50c02959db63360ef0f4a037e582d6c3249cab029c51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698708947143797194,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377aac2edfa5973c73516a60b3dd1cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 4e859895,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316ecfe87c0e64b42de40ae866a40306865631dd99af44eb29f3632a9cae047e,PodSandboxId:62e29da5962deaafa1f1ddaa34c686d8a85409a1d14016d8b9ab2ec46f639e3d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698708947072942109,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55259bd1b9f1e240aa9139582b4696e7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=082d72d6-ac13-4eff-9de8-d516020f553b name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.879408666Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=345b28c5-6efa-484b-903e-81f6c92607ed name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.879707464Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:604c5102dae912f9b35ab4c51b14130e39dcccb34a16b17132dbbe5f3b76eaf3,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-6pgvt,Uid:d854be1d-ae4e-420a-9853-253f0258915c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698708960869703519,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-6pgvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d854be1d-ae4e-420a-9853-253f0258915c,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-30T23:35:52.963593508Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b249b5e3da0dae31c588fabad9915088bff20f04d9548832e95f0be928c6d635,Metadata:&PodSandboxMetadata{Name:busybox-5bc68d56bd-7hhs5,Uid:2c28c851-1dbe-434e-a041-4bf33b87bd7b,Namespace:default,
Attempt:0,},State:SANDBOX_READY,CreatedAt:1698708960861236588,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5bc68d56bd-7hhs5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c28c851-1dbe-434e-a041-4bf33b87bd7b,pod-template-hash: 5bc68d56bd,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-30T23:35:52.963525798Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8af6a063aac2d08b9ba1d863a21c3594427b067d7745ce1778e7725bb7339feb,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6f2bbacd-e138-4f82-961e-76f1daf88ccd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698708953345097571,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f2bbacd-e138-4f82-961e-76f1daf88ccd,},Annotations:map[string]st
ring{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-10-30T23:35:52.963592179Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ddda496e91b862b50bd284bc0e06e6b39fcb331bfc978d6ea90240473ae16feb,Metadata:&PodSandboxMetadata{Name:kindnet-m9f5k,Uid:a79ceb52-48df-4240-9edc-05c81bf58f73,Namespace:kube-system,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1698708953312937676,Labels:map[string]string{app: kindnet,controller-revision-hash: 5666b6c4d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-m9f5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79ceb52-48df-4240-9edc-05c81bf58f73,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-30T23:35:52.963528094Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b225030659008b055c6b2c1adc7c9e12ab062fa912bff4873e8d609d57f4407c,Metadata:&PodSandboxMetadata{Name:kube-proxy-xbsl5,Uid:eb41a78a-bf80-4546-b7d6-423a8c3ad0e1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698708953303116466,Labels:map[string]string{controller-revision-hash: dffc744c9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xbsl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb41a78a-bf80-4546-b7d6-423a8c3ad0e1,k8s-app: kube-proxy,pod-templ
ate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-30T23:35:52.963524072Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d71dd6439aa9600ab99bda9ae4b47cefbf27756a75295548961f8c4b31b891f7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-370491,Uid:dd3eb04179d9bdc0a8332c92e6e42d18,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698708946524769530,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd3eb04179d9bdc0a8332c92e6e42d18,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dd3eb04179d9bdc0a8332c92e6e42d18,kubernetes.io/config.seen: 2023-10-30T23:35:45.962804107Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:62e29da5962deaafa1f1ddaa34c686d8a85409a1d14016d8b9ab2ec46f639e3d,Metadata:&PodSandboxMetadata{Name:kube-controller-mana
ger-multinode-370491,Uid:55259bd1b9f1e240aa9139582b4696e7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698708946519704825,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55259bd1b9f1e240aa9139582b4696e7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 55259bd1b9f1e240aa9139582b4696e7,kubernetes.io/config.seen: 2023-10-30T23:35:45.962803422Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7536043a43d4a7b3e6d30b05516880d31f4874a6b9f149071dd556ecd7934a18,Metadata:&PodSandboxMetadata{Name:etcd-multinode-370491,Uid:840387190d79e7771c73d8f6fcb777d3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698708946509635810,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-370491,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 840387190d79e7771c73d8f6fcb777d3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.231:2379,kubernetes.io/config.hash: 840387190d79e7771c73d8f6fcb777d3,kubernetes.io/config.seen: 2023-10-30T23:35:45.962798776Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d65c16d53f6f3524f3ce50c02959db63360ef0f4a037e582d6c3249cab029c51,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-370491,Uid:377aac2edfa5973c73516a60b3dd1cd5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698708946460836604,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377aac2edfa5973c73516a60b3dd1cd5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.231:8443,kuberne
tes.io/config.hash: 377aac2edfa5973c73516a60b3dd1cd5,kubernetes.io/config.seen: 2023-10-30T23:35:45.962802435Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=345b28c5-6efa-484b-903e-81f6c92607ed name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.880687942Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7e7c2f2f-d1f4-4b08-a224-b5bae5060d3e name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.880763006Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7e7c2f2f-d1f4-4b08-a224-b5bae5060d3e name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.881493277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e41791e5d17ef714d25b737488caf258dc170de6f1a1b4018b274ad9bbc2f75,PodSandboxId:8af6a063aac2d08b9ba1d863a21c3594427b067d7745ce1778e7725bb7339feb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698708985228045390,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f2bbacd-e138-4f82-961e-76f1daf88ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 324ceadb,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa73f543ba84059b6216e0a412f479ca285387feb7e812a82803b3ceff5c5677,PodSandboxId:b249b5e3da0dae31c588fabad9915088bff20f04d9548832e95f0be928c6d635,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698708962755628576,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-7hhs5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c28c851-1dbe-434e-a041-4bf33b87bd7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7649ab49,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ec200313c9cad65147ab68c715b246d37344c8b5d249ccc95019693797d10e,PodSandboxId:604c5102dae912f9b35ab4c51b14130e39dcccb34a16b17132dbbe5f3b76eaf3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698708961512523005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6pgvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d854be1d-ae4e-420a-9853-253f0258915c,},Annotations:map[string]string{io.kubernetes.container.hash: 11ac28ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2532c98b5ff4b4966b4e96fcf033cd7526bc3f8999d52372dbf46163c6c86088,PodSandboxId:ddda496e91b862b50bd284bc0e06e6b39fcb331bfc978d6ea90240473ae16feb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698708956692801760,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m9f5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: a79ceb52-48df-4240-9edc-05c81bf58f73,},Annotations:map[string]string{io.kubernetes.container.hash: 393bde1a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35deb9dd61a4ab129558ba94c83c819a6d0b1e7b89d90835170589f0eea474bf,PodSandboxId:8af6a063aac2d08b9ba1d863a21c3594427b067d7745ce1778e7725bb7339feb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698708954072107687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 6f2bbacd-e138-4f82-961e-76f1daf88ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 324ceadb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79b4d75864ed651a9538f08e5ba7efb78116a5f138699ad8871614d7498f0122,PodSandboxId:b225030659008b055c6b2c1adc7c9e12ab062fa912bff4873e8d609d57f4407c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698708954141614516,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xbsl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb41a78a-bf80-4546-b7d6-423a8c3a
d0e1,},Annotations:map[string]string{io.kubernetes.container.hash: b2372445,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d9ee7545d707c144c88911508813b4623dd460dbf23ac1ca53c38a0eb3906e,PodSandboxId:7536043a43d4a7b3e6d30b05516880d31f4874a6b9f149071dd556ecd7934a18,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698708947569283571,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840387190d79e7771c73d8f6fcb777d3,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: a0e21061,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53683b2ecb941e708d8c17f72ce1e6aed31d59a992e963e8973e0b9478a776fa,PodSandboxId:d71dd6439aa9600ab99bda9ae4b47cefbf27756a75295548961f8c4b31b891f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698708947471079595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd3eb04179d9bdc0a8332c92e6e42d18,},Annotations:map[string]string{io.kubernetes.container.has
h: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b5acbb071119e5db47edeebcae15e7cd63aa6d415c608934fa6516eca4585f,PodSandboxId:d65c16d53f6f3524f3ce50c02959db63360ef0f4a037e582d6c3249cab029c51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698708947143797194,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377aac2edfa5973c73516a60b3dd1cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 4e859895,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316ecfe87c0e64b42de40ae866a40306865631dd99af44eb29f3632a9cae047e,PodSandboxId:62e29da5962deaafa1f1ddaa34c686d8a85409a1d14016d8b9ab2ec46f639e3d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698708947072942109,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55259bd1b9f1e240aa9139582b4696e7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7e7c2f2f-d1f4-4b08-a224-b5bae5060d3e name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.884278051Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6f00e06b-4b58-4e91-9d6b-7bf33263a7f5 name=/runtime.v1.RuntimeService/Version
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.884371168Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6f00e06b-4b58-4e91-9d6b-7bf33263a7f5 name=/runtime.v1.RuntimeService/Version
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.885971678Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=50eaecf3-3db7-49fa-97fb-e29852ab685b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.886421247Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698709175886374326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=50eaecf3-3db7-49fa-97fb-e29852ab685b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.887376120Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d44c6471-907f-47bf-9a77-f2b430cf23fb name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.887442595Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d44c6471-907f-47bf-9a77-f2b430cf23fb name=/runtime.v1.RuntimeService/ListContainers
	Oct 30 23:39:35 multinode-370491 crio[709]: time="2023-10-30 23:39:35.887737638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e41791e5d17ef714d25b737488caf258dc170de6f1a1b4018b274ad9bbc2f75,PodSandboxId:8af6a063aac2d08b9ba1d863a21c3594427b067d7745ce1778e7725bb7339feb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698708985228045390,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f2bbacd-e138-4f82-961e-76f1daf88ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 324ceadb,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa73f543ba84059b6216e0a412f479ca285387feb7e812a82803b3ceff5c5677,PodSandboxId:b249b5e3da0dae31c588fabad9915088bff20f04d9548832e95f0be928c6d635,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698708962755628576,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-7hhs5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c28c851-1dbe-434e-a041-4bf33b87bd7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7649ab49,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ec200313c9cad65147ab68c715b246d37344c8b5d249ccc95019693797d10e,PodSandboxId:604c5102dae912f9b35ab4c51b14130e39dcccb34a16b17132dbbe5f3b76eaf3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698708961512523005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6pgvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d854be1d-ae4e-420a-9853-253f0258915c,},Annotations:map[string]string{io.kubernetes.container.hash: 11ac28ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2532c98b5ff4b4966b4e96fcf033cd7526bc3f8999d52372dbf46163c6c86088,PodSandboxId:ddda496e91b862b50bd284bc0e06e6b39fcb331bfc978d6ea90240473ae16feb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698708956692801760,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m9f5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: a79ceb52-48df-4240-9edc-05c81bf58f73,},Annotations:map[string]string{io.kubernetes.container.hash: 393bde1a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35deb9dd61a4ab129558ba94c83c819a6d0b1e7b89d90835170589f0eea474bf,PodSandboxId:8af6a063aac2d08b9ba1d863a21c3594427b067d7745ce1778e7725bb7339feb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698708954072107687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 6f2bbacd-e138-4f82-961e-76f1daf88ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 324ceadb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79b4d75864ed651a9538f08e5ba7efb78116a5f138699ad8871614d7498f0122,PodSandboxId:b225030659008b055c6b2c1adc7c9e12ab062fa912bff4873e8d609d57f4407c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698708954141614516,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xbsl5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb41a78a-bf80-4546-b7d6-423a8c3a
d0e1,},Annotations:map[string]string{io.kubernetes.container.hash: b2372445,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d9ee7545d707c144c88911508813b4623dd460dbf23ac1ca53c38a0eb3906e,PodSandboxId:7536043a43d4a7b3e6d30b05516880d31f4874a6b9f149071dd556ecd7934a18,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698708947569283571,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840387190d79e7771c73d8f6fcb777d3,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: a0e21061,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53683b2ecb941e708d8c17f72ce1e6aed31d59a992e963e8973e0b9478a776fa,PodSandboxId:d71dd6439aa9600ab99bda9ae4b47cefbf27756a75295548961f8c4b31b891f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698708947471079595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd3eb04179d9bdc0a8332c92e6e42d18,},Annotations:map[string]string{io.kubernetes.container.has
h: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b5acbb071119e5db47edeebcae15e7cd63aa6d415c608934fa6516eca4585f,PodSandboxId:d65c16d53f6f3524f3ce50c02959db63360ef0f4a037e582d6c3249cab029c51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698708947143797194,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377aac2edfa5973c73516a60b3dd1cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 4e859895,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316ecfe87c0e64b42de40ae866a40306865631dd99af44eb29f3632a9cae047e,PodSandboxId:62e29da5962deaafa1f1ddaa34c686d8a85409a1d14016d8b9ab2ec46f639e3d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698708947072942109,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370491,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55259bd1b9f1e240aa9139582b4696e7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d44c6471-907f-47bf-9a77-f2b430cf23fb name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1e41791e5d17e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   8af6a063aac2d       storage-provisioner
	fa73f543ba840       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   b249b5e3da0da       busybox-5bc68d56bd-7hhs5
	e2ec200313c9c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   604c5102dae91       coredns-5dd5756b68-6pgvt
	2532c98b5ff4b       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   ddda496e91b86       kindnet-m9f5k
	79b4d75864ed6       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      3 minutes ago       Running             kube-proxy                1                   b225030659008       kube-proxy-xbsl5
	35deb9dd61a4a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   8af6a063aac2d       storage-provisioner
	71d9ee7545d70       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   7536043a43d4a       etcd-multinode-370491
	53683b2ecb941       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      3 minutes ago       Running             kube-scheduler            1                   d71dd6439aa96       kube-scheduler-multinode-370491
	58b5acbb07111       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      3 minutes ago       Running             kube-apiserver            1                   d65c16d53f6f3       kube-apiserver-multinode-370491
	316ecfe87c0e6       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      3 minutes ago       Running             kube-controller-manager   1                   62e29da5962de       kube-controller-manager-multinode-370491
	
	* 
	* ==> coredns [e2ec200313c9cad65147ab68c715b246d37344c8b5d249ccc95019693797d10e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39951 - 23440 "HINFO IN 4204776537665063566.9172381085974658144. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012075935s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-370491
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-370491
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4
	                    minikube.k8s.io/name=multinode-370491
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_30T23_25_36_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Oct 2023 23:25:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-370491
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Oct 2023 23:39:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Oct 2023 23:36:22 +0000   Mon, 30 Oct 2023 23:25:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Oct 2023 23:36:22 +0000   Mon, 30 Oct 2023 23:25:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Oct 2023 23:36:22 +0000   Mon, 30 Oct 2023 23:25:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Oct 2023 23:36:22 +0000   Mon, 30 Oct 2023 23:35:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.231
	  Hostname:    multinode-370491
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 636fd736a51348bda33817c729308277
	  System UUID:                636fd736-a513-48bd-a338-17c729308277
	  Boot ID:                    2eb85dae-5393-4efa-ac0c-c75a1e5a7e38
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-7hhs5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-6pgvt                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-370491                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-m9f5k                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-370491             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-370491    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-xbsl5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-370491             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m41s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-370491 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-370491 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-370491 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-370491 event: Registered Node multinode-370491 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-370491 status is now: NodeReady
	  Normal  Starting                 3m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m50s (x8 over 3m50s)  kubelet          Node multinode-370491 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x8 over 3m50s)  kubelet          Node multinode-370491 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x7 over 3m50s)  kubelet          Node multinode-370491 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m32s                  node-controller  Node multinode-370491 event: Registered Node multinode-370491 in Controller
	
	
	Name:               multinode-370491-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-370491-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Oct 2023 23:37:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-370491-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Oct 2023 23:39:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Oct 2023 23:37:47 +0000   Mon, 30 Oct 2023 23:37:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Oct 2023 23:37:47 +0000   Mon, 30 Oct 2023 23:37:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Oct 2023 23:37:47 +0000   Mon, 30 Oct 2023 23:37:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Oct 2023 23:37:47 +0000   Mon, 30 Oct 2023 23:37:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.85
	  Hostname:    multinode-370491-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 09caef3963124bd193408d450ad01051
	  System UUID:                09caef39-6312-4bd1-9340-8d450ad01051
	  Boot ID:                    35b2b885-d6bb-4e06-ab65-0931f0b0b4da
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-x4lrn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-76g2q               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-g9wzd            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From        Message
	  ----     ------                   ----                 ----        -------
	  Normal   Starting                 13m                  kube-proxy  
	  Normal   Starting                 106s                 kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)    kubelet     Node multinode-370491-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)    kubelet     Node multinode-370491-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)    kubelet     Node multinode-370491-m02 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m53s                kubelet     Node multinode-370491-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m8s (x2 over 3m8s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       112s                 kubelet     Node multinode-370491-m02 status is now: NodeNotSchedulable
	  Normal   NodeReady                112s (x2 over 12m)   kubelet     Node multinode-370491-m02 status is now: NodeReady
	  Normal   Starting                 109s                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  109s (x2 over 109s)  kubelet     Node multinode-370491-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    109s (x2 over 109s)  kubelet     Node multinode-370491-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     109s (x2 over 109s)  kubelet     Node multinode-370491-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  109s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                109s                 kubelet     Node multinode-370491-m02 status is now: NodeReady
	
	
	Name:               multinode-370491-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-370491-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Oct 2023 23:39:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-370491-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Oct 2023 23:39:31 +0000   Mon, 30 Oct 2023 23:39:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Oct 2023 23:39:31 +0000   Mon, 30 Oct 2023 23:39:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Oct 2023 23:39:31 +0000   Mon, 30 Oct 2023 23:39:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Oct 2023 23:39:31 +0000   Mon, 30 Oct 2023 23:39:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.108
	  Hostname:    multinode-370491-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 86b27936b5ef4534aa74c4e8a7f18fcf
	  System UUID:                86b27936-b5ef-4534-aa74-c4e8a7f18fcf
	  Boot ID:                    7c5b7c64-1b5b-4886-9f80-b530fa3d0f86
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-tgkst    0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kindnet-m45c4               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-tv2b7            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 3s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-370491-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-370491-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-370491-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-370491-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node multinode-370491-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node multinode-370491-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node multinode-370491-m03 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             71s                kubelet     Node multinode-370491-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        38s (x2 over 98s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeReady                9s (x2 over 11m)   kubelet     Node multinode-370491-m03 status is now: NodeReady
	  Normal   Starting                 5s                 kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)    kubelet     Node multinode-370491-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)    kubelet     Node multinode-370491-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                 kubelet     Node multinode-370491-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)    kubelet     Node multinode-370491-m03 status is now: NodeHasSufficientMemory
	
	* 
	* ==> dmesg <==
	* [Oct30 23:35] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067172] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.338372] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.486135] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153027] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.475439] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.611539] systemd-fstab-generator[635]: Ignoring "noauto" for root device
	[  +0.130858] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.157909] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.106041] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.212186] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[ +16.931816] systemd-fstab-generator[910]: Ignoring "noauto" for root device
	[Oct30 23:36] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [71d9ee7545d707c144c88911508813b4623dd460dbf23ac1ca53c38a0eb3906e] <==
	* {"level":"info","ts":"2023-10-30T23:35:49.3221Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-30T23:35:49.322109Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-30T23:35:49.322672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 switched to configuration voters=(7674903412691839616)"}
	{"level":"info","ts":"2023-10-30T23:35:49.322902Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1a20717615099fdd","local-member-id":"6a82bbfd8eee2a80","added-peer-id":"6a82bbfd8eee2a80","added-peer-peer-urls":["https://192.168.39.231:2380"]}
	{"level":"info","ts":"2023-10-30T23:35:49.327879Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1a20717615099fdd","local-member-id":"6a82bbfd8eee2a80","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-30T23:35:49.328033Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-30T23:35:49.338029Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-30T23:35:49.338252Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"6a82bbfd8eee2a80","initial-advertise-peer-urls":["https://192.168.39.231:2380"],"listen-peer-urls":["https://192.168.39.231:2380"],"advertise-client-urls":["https://192.168.39.231:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.231:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-30T23:35:49.338277Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-30T23:35:49.338318Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.231:2380"}
	{"level":"info","ts":"2023-10-30T23:35:49.338323Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.231:2380"}
	{"level":"info","ts":"2023-10-30T23:35:50.679877Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-30T23:35:50.680021Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-30T23:35:50.68009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 received MsgPreVoteResp from 6a82bbfd8eee2a80 at term 2"}
	{"level":"info","ts":"2023-10-30T23:35:50.680134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 became candidate at term 3"}
	{"level":"info","ts":"2023-10-30T23:35:50.680164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 received MsgVoteResp from 6a82bbfd8eee2a80 at term 3"}
	{"level":"info","ts":"2023-10-30T23:35:50.680195Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 became leader at term 3"}
	{"level":"info","ts":"2023-10-30T23:35:50.680227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6a82bbfd8eee2a80 elected leader 6a82bbfd8eee2a80 at term 3"}
	{"level":"info","ts":"2023-10-30T23:35:50.683179Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"6a82bbfd8eee2a80","local-member-attributes":"{Name:multinode-370491 ClientURLs:[https://192.168.39.231:2379]}","request-path":"/0/members/6a82bbfd8eee2a80/attributes","cluster-id":"1a20717615099fdd","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-30T23:35:50.683204Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-30T23:35:50.683417Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-30T23:35:50.683472Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-30T23:35:50.68323Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-30T23:35:50.685085Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-30T23:35:50.685082Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.231:2379"}
	
	* 
	* ==> kernel <==
	*  23:39:36 up 4 min,  0 users,  load average: 0.19, 0.28, 0.14
	Linux multinode-370491 5.10.57 #1 SMP Mon Oct 30 21:42:24 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [2532c98b5ff4b4966b4e96fcf033cd7526bc3f8999d52372dbf46163c6c86088] <==
	* I1030 23:38:48.435417       1 main.go:250] Node multinode-370491-m03 has CIDR [10.244.3.0/24] 
	I1030 23:38:58.444001       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I1030 23:38:58.444057       1 main.go:227] handling current node
	I1030 23:38:58.444159       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I1030 23:38:58.444169       1 main.go:250] Node multinode-370491-m02 has CIDR [10.244.1.0/24] 
	I1030 23:38:58.444468       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I1030 23:38:58.444665       1 main.go:250] Node multinode-370491-m03 has CIDR [10.244.3.0/24] 
	I1030 23:39:08.458065       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I1030 23:39:08.458162       1 main.go:227] handling current node
	I1030 23:39:08.458189       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I1030 23:39:08.458225       1 main.go:250] Node multinode-370491-m02 has CIDR [10.244.1.0/24] 
	I1030 23:39:08.458354       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I1030 23:39:08.458386       1 main.go:250] Node multinode-370491-m03 has CIDR [10.244.3.0/24] 
	I1030 23:39:18.467147       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I1030 23:39:18.467329       1 main.go:227] handling current node
	I1030 23:39:18.467372       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I1030 23:39:18.467396       1 main.go:250] Node multinode-370491-m02 has CIDR [10.244.1.0/24] 
	I1030 23:39:18.467516       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I1030 23:39:18.467648       1 main.go:250] Node multinode-370491-m03 has CIDR [10.244.3.0/24] 
	I1030 23:39:28.477258       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I1030 23:39:28.477455       1 main.go:227] handling current node
	I1030 23:39:28.477678       1 main.go:223] Handling node with IPs: map[192.168.39.85:{}]
	I1030 23:39:28.477801       1 main.go:250] Node multinode-370491-m02 has CIDR [10.244.1.0/24] 
	I1030 23:39:28.478275       1 main.go:223] Handling node with IPs: map[192.168.39.108:{}]
	I1030 23:39:28.478309       1 main.go:250] Node multinode-370491-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kube-apiserver [58b5acbb071119e5db47edeebcae15e7cd63aa6d415c608934fa6516eca4585f] <==
	* I1030 23:35:51.991049       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1030 23:35:51.991382       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1030 23:35:51.991515       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1030 23:35:52.164514       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1030 23:35:52.184203       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1030 23:35:52.184247       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1030 23:35:52.185301       1 shared_informer.go:318] Caches are synced for configmaps
	I1030 23:35:52.186000       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1030 23:35:52.186433       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1030 23:35:52.189927       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1030 23:35:52.192195       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1030 23:35:52.192388       1 aggregator.go:166] initial CRD sync complete...
	I1030 23:35:52.192454       1 autoregister_controller.go:141] Starting autoregister controller
	I1030 23:35:52.192479       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1030 23:35:52.192502       1 cache.go:39] Caches are synced for autoregister controller
	E1030 23:35:52.224747       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1030 23:35:52.241062       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1030 23:35:52.990193       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1030 23:35:54.903856       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1030 23:35:55.095462       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1030 23:35:55.115170       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1030 23:35:55.185872       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1030 23:35:55.192429       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1030 23:36:04.835351       1 controller.go:624] quota admission added evaluator for: endpoints
	I1030 23:36:04.842384       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [316ecfe87c0e64b42de40ae866a40306865631dd99af44eb29f3632a9cae047e] <==
	* I1030 23:37:47.529448       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-370491-m03"
	I1030 23:37:47.530964       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-370491-m02\" does not exist"
	I1030 23:37:47.550238       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-370491-m02" podCIDRs=["10.244.1.0/24"]
	I1030 23:37:47.870118       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-370491-m02"
	I1030 23:37:48.441417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="71.933µs"
	I1030 23:37:59.691342       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="145.903µs"
	I1030 23:38:00.306391       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="83.889µs"
	I1030 23:38:00.309811       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="100.911µs"
	I1030 23:38:25.528120       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-370491-m02"
	I1030 23:39:25.688934       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="201.026µs"
	I1030 23:39:27.292659       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-370491-m02"
	I1030 23:39:27.714133       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-x4lrn"
	I1030 23:39:27.733776       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="43.536664ms"
	I1030 23:39:27.749420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="15.541555ms"
	I1030 23:39:27.749664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="47.157µs"
	I1030 23:39:27.756809       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="46.491µs"
	I1030 23:39:29.580424       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.593396ms"
	I1030 23:39:29.581165       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="238.225µs"
	I1030 23:39:29.875677       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-tgkst" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-tgkst"
	I1030 23:39:30.721805       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-370491-m02"
	I1030 23:39:31.361080       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-370491-m03\" does not exist"
	I1030 23:39:31.361278       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-370491-m02"
	I1030 23:39:31.397071       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-370491-m03" podCIDRs=["10.244.2.0/24"]
	I1030 23:39:31.508376       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-370491-m03"
	I1030 23:39:32.263918       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="65.051µs"
	
	* 
	* ==> kube-proxy [79b4d75864ed651a9538f08e5ba7efb78116a5f138699ad8871614d7498f0122] <==
	* I1030 23:35:54.537666       1 server_others.go:69] "Using iptables proxy"
	I1030 23:35:54.560408       1 node.go:141] Successfully retrieved node IP: 192.168.39.231
	I1030 23:35:54.620790       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1030 23:35:54.624591       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1030 23:35:54.627504       1 server_others.go:152] "Using iptables Proxier"
	I1030 23:35:54.627666       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1030 23:35:54.627878       1 server.go:846] "Version info" version="v1.28.3"
	I1030 23:35:54.628064       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 23:35:54.628766       1 config.go:188] "Starting service config controller"
	I1030 23:35:54.628854       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1030 23:35:54.629031       1 config.go:97] "Starting endpoint slice config controller"
	I1030 23:35:54.629086       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1030 23:35:54.629504       1 config.go:315] "Starting node config controller"
	I1030 23:35:54.629699       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1030 23:35:54.729120       1 shared_informer.go:318] Caches are synced for service config
	I1030 23:35:54.729191       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1030 23:35:54.729863       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [53683b2ecb941e708d8c17f72ce1e6aed31d59a992e963e8973e0b9478a776fa] <==
	* I1030 23:35:49.541618       1 serving.go:348] Generated self-signed cert in-memory
	W1030 23:35:52.075051       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1030 23:35:52.075098       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1030 23:35:52.075110       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1030 23:35:52.075117       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1030 23:35:52.159856       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1030 23:35:52.159905       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 23:35:52.164170       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1030 23:35:52.170059       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1030 23:35:52.170114       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1030 23:35:52.176187       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1030 23:35:52.277221       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-30 23:35:20 UTC, ends at Mon 2023-10-30 23:39:36 UTC. --
	Oct 30 23:35:54 multinode-370491 kubelet[916]: E1030 23:35:54.664411     916 projected.go:198] Error preparing data for projected volume kube-api-access-qwz4t for pod default/busybox-5bc68d56bd-7hhs5: object "default"/"kube-root-ca.crt" not registered
	Oct 30 23:35:54 multinode-370491 kubelet[916]: E1030 23:35:54.664469     916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c28c851-1dbe-434e-a041-4bf33b87bd7b-kube-api-access-qwz4t podName:2c28c851-1dbe-434e-a041-4bf33b87bd7b nodeName:}" failed. No retries permitted until 2023-10-30 23:35:56.66445249 +0000 UTC m=+10.917309949 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qwz4t" (UniqueName: "kubernetes.io/projected/2c28c851-1dbe-434e-a041-4bf33b87bd7b-kube-api-access-qwz4t") pod "busybox-5bc68d56bd-7hhs5" (UID: "2c28c851-1dbe-434e-a041-4bf33b87bd7b") : object "default"/"kube-root-ca.crt" not registered
	Oct 30 23:35:55 multinode-370491 kubelet[916]: E1030 23:35:55.027015     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-7hhs5" podUID="2c28c851-1dbe-434e-a041-4bf33b87bd7b"
	Oct 30 23:35:55 multinode-370491 kubelet[916]: E1030 23:35:55.027181     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-6pgvt" podUID="d854be1d-ae4e-420a-9853-253f0258915c"
	Oct 30 23:35:56 multinode-370491 kubelet[916]: E1030 23:35:56.578705     916 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 30 23:35:56 multinode-370491 kubelet[916]: E1030 23:35:56.578813     916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d854be1d-ae4e-420a-9853-253f0258915c-config-volume podName:d854be1d-ae4e-420a-9853-253f0258915c nodeName:}" failed. No retries permitted until 2023-10-30 23:36:00.578793087 +0000 UTC m=+14.831650529 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d854be1d-ae4e-420a-9853-253f0258915c-config-volume") pod "coredns-5dd5756b68-6pgvt" (UID: "d854be1d-ae4e-420a-9853-253f0258915c") : object "kube-system"/"coredns" not registered
	Oct 30 23:35:56 multinode-370491 kubelet[916]: E1030 23:35:56.679433     916 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Oct 30 23:35:56 multinode-370491 kubelet[916]: E1030 23:35:56.679454     916 projected.go:198] Error preparing data for projected volume kube-api-access-qwz4t for pod default/busybox-5bc68d56bd-7hhs5: object "default"/"kube-root-ca.crt" not registered
	Oct 30 23:35:56 multinode-370491 kubelet[916]: E1030 23:35:56.679489     916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c28c851-1dbe-434e-a041-4bf33b87bd7b-kube-api-access-qwz4t podName:2c28c851-1dbe-434e-a041-4bf33b87bd7b nodeName:}" failed. No retries permitted until 2023-10-30 23:36:00.679478045 +0000 UTC m=+14.932335498 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qwz4t" (UniqueName: "kubernetes.io/projected/2c28c851-1dbe-434e-a041-4bf33b87bd7b-kube-api-access-qwz4t") pod "busybox-5bc68d56bd-7hhs5" (UID: "2c28c851-1dbe-434e-a041-4bf33b87bd7b") : object "default"/"kube-root-ca.crt" not registered
	Oct 30 23:35:57 multinode-370491 kubelet[916]: E1030 23:35:57.025789     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-7hhs5" podUID="2c28c851-1dbe-434e-a041-4bf33b87bd7b"
	Oct 30 23:35:57 multinode-370491 kubelet[916]: E1030 23:35:57.026167     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-6pgvt" podUID="d854be1d-ae4e-420a-9853-253f0258915c"
	Oct 30 23:35:58 multinode-370491 kubelet[916]: I1030 23:35:58.501879     916 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 30 23:36:25 multinode-370491 kubelet[916]: I1030 23:36:25.199139     916 scope.go:117] "RemoveContainer" containerID="35deb9dd61a4ab129558ba94c83c819a6d0b1e7b89d90835170589f0eea474bf"
	Oct 30 23:36:46 multinode-370491 kubelet[916]: E1030 23:36:46.044014     916 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 30 23:36:46 multinode-370491 kubelet[916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 30 23:36:46 multinode-370491 kubelet[916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 30 23:36:46 multinode-370491 kubelet[916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 30 23:37:46 multinode-370491 kubelet[916]: E1030 23:37:46.043018     916 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 30 23:37:46 multinode-370491 kubelet[916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 30 23:37:46 multinode-370491 kubelet[916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 30 23:37:46 multinode-370491 kubelet[916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 30 23:38:46 multinode-370491 kubelet[916]: E1030 23:38:46.044045     916 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 30 23:38:46 multinode-370491 kubelet[916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 30 23:38:46 multinode-370491 kubelet[916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 30 23:38:46 multinode-370491 kubelet[916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-370491 -n multinode-370491
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-370491 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (689.16s)
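The post-mortem above points at two recurring problems on the worker nodes: kubelet reports ContainerGCFailed because /var/run/crio/crio.sock is missing while the node restarts, and the iptables canary cannot create its chain because the ip6tables nat table is unavailable in the guest kernel. A minimal follow-up sketch for inspecting this by hand (not part of the test run; it assumes the profile is still running and that `minikube ssh -n` accepts the node names shown above):

	# list the nodes of the multi-node profile
	out/minikube-linux-amd64 -p multinode-370491 node list
	# check CRI-O and kubelet on the worker that logged the crio.sock error
	out/minikube-linux-amd64 -p multinode-370491 ssh -n multinode-370491-m02 "sudo systemctl status crio"
	out/minikube-linux-amd64 -p multinode-370491 ssh -n multinode-370491-m02 "sudo journalctl -u kubelet --no-pager | tail -n 50"
	# collect full logs for a GitHub issue, as minikube itself suggests
	out/minikube-linux-amd64 -p multinode-370491 logs --file=logs.txt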

                                                
                                    
TestMultiNode/serial/StopMultiNode (143.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 stop
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-370491 stop: exit status 82 (2m1.254251146s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-370491"  ...
	* Stopping node "multinode-370491"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-370491 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-370491 status: exit status 3 (18.669719491s)

                                                
                                                
-- stdout --
	multinode-370491
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-370491-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1030 23:41:59.113269  234792 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	E1030 23:41:59.113324  234792 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-370491 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-370491 -n multinode-370491
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-370491 -n multinode-370491: exit status 3 (3.188646632s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1030 23:42:02.473429  234875 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	E1030 23:42:02.473449  234875 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-370491" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.11s)
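Both stop attempts returned exit status 82 (GUEST_STOP_TIMEOUT) with the VM still reported as "Running", and the follow-up status calls could no longer reach 192.168.39.231:22. A hedged triage sketch, reusing the profile name and the stop log path printed above (the virsh step is an assumption that the kvm2 driver's libvirt domain is still visible on the host):

	# re-run the stop that timed out and read its dedicated log
	out/minikube-linux-amd64 -p multinode-370491 stop
	cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log
	# confirm what minikube thinks the hosts are doing
	out/minikube-linux-amd64 -p multinode-370491 status
	# check the underlying libvirt domains directly (assumes the kvm2 driver and host access)
	sudo virsh list --all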

                                                
                                    
TestPreload (284.44s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-022107 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1030 23:52:08.184892  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1030 23:52:17.630738  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-022107 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m22.221293739s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-022107 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-022107 image pull gcr.io/k8s-minikube/busybox: (1.131212391s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-022107
E1030 23:54:14.584398  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1030 23:54:30.632313  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-022107: exit status 82 (2m1.446281157s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-022107"  ...
	* Stopping node "test-preload-022107"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-022107 failed: exit status 82
panic.go:523: *** TestPreload FAILED at 2023-10-30 23:54:47.27918881 +0000 UTC m=+3189.529205717
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-022107 -n test-preload-022107
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-022107 -n test-preload-022107: exit status 3 (18.52742339s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1030 23:55:05.801396  237875 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	E1030 23:55:05.801421  237875 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-022107" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-022107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-022107
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-022107: (1.112814827s)
--- FAIL: TestPreload (284.44s)
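The start and image-pull steps succeeded; the failure is the same stop timeout (exit status 82) seen elsewhere in this run. A reproduction sketch built only from the commands logged above, should the sequence need to be replayed by hand:

	out/minikube-linux-amd64 start -p test-preload-022107 --memory=2200 --alsologtostderr \
	  --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-022107 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-022107     # this is the step that timed out
	out/minikube-linux-amd64 delete -p test-preload-022107   # cleanup, as the harness does afterwards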

                                                
                                    
TestRunningBinaryUpgrade (155.58s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.3439285495.exe start -p running-upgrade-577087 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1030 23:57:08.185334  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.3439285495.exe start -p running-upgrade-577087 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m11.051502036s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-577087 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-577087 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (22.428312358s)

                                                
                                                
-- stdout --
	* [running-upgrade-577087] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17527
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-577087 in cluster running-upgrade-577087
	* Updating the running kvm2 "running-upgrade-577087" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 23:59:15.269357  240679 out.go:296] Setting OutFile to fd 1 ...
	I1030 23:59:15.269532  240679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:59:15.269547  240679 out.go:309] Setting ErrFile to fd 2...
	I1030 23:59:15.269555  240679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:59:15.269753  240679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1030 23:59:15.270395  240679 out.go:303] Setting JSON to false
	I1030 23:59:15.271389  240679 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27707,"bootTime":1698682648,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 23:59:15.271460  240679 start.go:138] virtualization: kvm guest
	I1030 23:59:15.273447  240679 out.go:177] * [running-upgrade-577087] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 23:59:15.275036  240679 notify.go:220] Checking for updates...
	I1030 23:59:15.275039  240679 out.go:177]   - MINIKUBE_LOCATION=17527
	I1030 23:59:15.276678  240679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 23:59:15.278298  240679 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:59:15.279755  240679 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1030 23:59:15.281184  240679 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 23:59:15.282605  240679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 23:59:15.284680  240679 config.go:182] Loaded profile config "running-upgrade-577087": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1030 23:59:15.284705  240679 start_flags.go:697] config upgrade: Driver=kvm2
	I1030 23:59:15.284721  240679 start_flags.go:709] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df
	I1030 23:59:15.284811  240679 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/running-upgrade-577087/config.json ...
	I1030 23:59:15.285677  240679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:59:15.285749  240679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:59:15.301519  240679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39695
	I1030 23:59:15.301947  240679 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:59:15.302609  240679 main.go:141] libmachine: Using API Version  1
	I1030 23:59:15.302683  240679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:59:15.303111  240679 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:59:15.303340  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .DriverName
	I1030 23:59:15.305526  240679 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1030 23:59:15.307100  240679 driver.go:378] Setting default libvirt URI to qemu:///system
	I1030 23:59:15.307389  240679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:59:15.307433  240679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:59:15.324304  240679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37523
	I1030 23:59:15.324837  240679 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:59:15.325283  240679 main.go:141] libmachine: Using API Version  1
	I1030 23:59:15.325307  240679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:59:15.325689  240679 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:59:15.325904  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .DriverName
	I1030 23:59:15.360715  240679 out.go:177] * Using the kvm2 driver based on existing profile
	I1030 23:59:15.362068  240679 start.go:298] selected driver: kvm2
	I1030 23:59:15.362093  240679 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-577087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.246 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1030 23:59:15.362199  240679 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 23:59:15.362920  240679 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:59:15.363015  240679 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17527-208817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 23:59:15.379641  240679 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1030 23:59:15.380035  240679 cni.go:84] Creating CNI manager for ""
	I1030 23:59:15.380057  240679 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1030 23:59:15.380068  240679 start_flags.go:323] config:
	{Name:running-upgrade-577087 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.246 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1030 23:59:15.380295  240679 iso.go:125] acquiring lock: {Name:mk17c26869b21ec4c3726ac5b4b2fb393d92c043 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:59:15.382724  240679 out.go:177] * Starting control plane node running-upgrade-577087 in cluster running-upgrade-577087
	I1030 23:59:15.384185  240679 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1030 23:59:15.413116  240679 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1030 23:59:15.413286  240679 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/running-upgrade-577087/config.json ...
	I1030 23:59:15.413386  240679 cache.go:107] acquiring lock: {Name:mk2c1bd2158b20fe87f0df93928ed44b7086d2b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:59:15.413445  240679 cache.go:107] acquiring lock: {Name:mk47cd24dee0f3c893916cdf3d12033bb43118a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:59:15.413474  240679 cache.go:107] acquiring lock: {Name:mk6870915238206bee42523a3c61b0972894e28d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:59:15.413502  240679 cache.go:115] /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1030 23:59:15.413520  240679 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 146.492µs
	I1030 23:59:15.413533  240679 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1030 23:59:15.413550  240679 cache.go:107] acquiring lock: {Name:mkabdd1e84857a847f5f32881a5f748726556ef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:59:15.413567  240679 cache.go:107] acquiring lock: {Name:mk00f1816d1ae25ed19174cc1ed4978ea5624e9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:59:15.413616  240679 start.go:365] acquiring machines lock for running-upgrade-577087: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1030 23:59:15.413398  240679 cache.go:107] acquiring lock: {Name:mkb343ce6f6fee532319b7f404307089fda8928c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:59:15.414012  240679 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1030 23:59:15.414136  240679 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I1030 23:59:15.414212  240679 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I1030 23:59:15.413610  240679 cache.go:107] acquiring lock: {Name:mk463dc738a6024c485d67e69c733dd1613dbb05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:59:15.414439  240679 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I1030 23:59:15.414501  240679 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I1030 23:59:15.414739  240679 cache.go:107] acquiring lock: {Name:mk698bbf6ca439d1c8312bd77af7026352b426ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:59:15.414820  240679 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1030 23:59:15.414910  240679 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1030 23:59:15.415666  240679 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I1030 23:59:15.415743  240679 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I1030 23:59:15.415781  240679 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1030 23:59:15.415974  240679 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1030 23:59:15.416279  240679 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1030 23:59:15.416600  240679 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I1030 23:59:15.418103  240679 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I1030 23:59:15.583874  240679 cache.go:162] opening:  /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1030 23:59:15.589072  240679 cache.go:162] opening:  /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1030 23:59:15.598713  240679 cache.go:162] opening:  /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I1030 23:59:15.600737  240679 cache.go:162] opening:  /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I1030 23:59:15.637366  240679 cache.go:162] opening:  /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I1030 23:59:15.644371  240679 cache.go:162] opening:  /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I1030 23:59:15.659314  240679 cache.go:162] opening:  /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I1030 23:59:15.687557  240679 cache.go:157] /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1030 23:59:15.687589  240679 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 272.886663ms
	I1030 23:59:15.687606  240679 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1030 23:59:16.255160  240679 cache.go:157] /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1030 23:59:16.255189  240679 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 841.639606ms
	I1030 23:59:16.255206  240679 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1030 23:59:16.653282  240679 cache.go:157] /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1030 23:59:16.653318  240679 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.239817765s
	I1030 23:59:16.653335  240679 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1030 23:59:16.666193  240679 cache.go:157] /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1030 23:59:16.666229  240679 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.2527932s
	I1030 23:59:16.666246  240679 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1030 23:59:16.810148  240679 cache.go:157] /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1030 23:59:16.810189  240679 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.396616362s
	I1030 23:59:16.810207  240679 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1030 23:59:17.289518  240679 cache.go:157] /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1030 23:59:17.289544  240679 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.876078727s
	I1030 23:59:17.289559  240679 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1030 23:59:17.510589  240679 cache.go:157] /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1030 23:59:17.510623  240679 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 2.097233474s
	I1030 23:59:17.510640  240679 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1030 23:59:17.510662  240679 cache.go:87] Successfully saved all images to host disk.
	I1030 23:59:33.804626  240679 start.go:369] acquired machines lock for "running-upgrade-577087" in 18.390977776s
	I1030 23:59:33.804699  240679 start.go:96] Skipping create...Using existing machine configuration
	I1030 23:59:33.804711  240679 fix.go:54] fixHost starting: minikube
	I1030 23:59:33.805212  240679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:59:33.805248  240679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:59:33.824097  240679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33903
	I1030 23:59:33.824614  240679 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:59:33.825246  240679 main.go:141] libmachine: Using API Version  1
	I1030 23:59:33.825275  240679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:59:33.825743  240679 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:59:33.825961  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .DriverName
	I1030 23:59:33.826147  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetState
	I1030 23:59:33.828135  240679 fix.go:102] recreateIfNeeded on running-upgrade-577087: state=Running err=<nil>
	W1030 23:59:33.828154  240679 fix.go:128] unexpected machine state, will restart: <nil>
	I1030 23:59:33.829807  240679 out.go:177] * Updating the running kvm2 "running-upgrade-577087" VM ...
	I1030 23:59:33.831719  240679 machine.go:88] provisioning docker machine ...
	I1030 23:59:33.831752  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .DriverName
	I1030 23:59:33.832030  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetMachineName
	I1030 23:59:33.832245  240679 buildroot.go:166] provisioning hostname "running-upgrade-577087"
	I1030 23:59:33.832273  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetMachineName
	I1030 23:59:33.832422  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHHostname
	I1030 23:59:33.835597  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:33.836149  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:fa:2a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 00:57:35 +0000 UTC Type:0 Mac:52:54:00:44:fa:2a Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-577087 Clientid:01:52:54:00:44:fa:2a}
	I1030 23:59:33.836195  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined IP address 192.168.50.246 and MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:33.836296  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHPort
	I1030 23:59:33.836537  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHKeyPath
	I1030 23:59:33.836731  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHKeyPath
	I1030 23:59:33.836921  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHUsername
	I1030 23:59:33.837141  240679 main.go:141] libmachine: Using SSH client type: native
	I1030 23:59:33.837674  240679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I1030 23:59:33.837703  240679 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-577087 && echo "running-upgrade-577087" | sudo tee /etc/hostname
	I1030 23:59:34.002658  240679 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-577087
	
	I1030 23:59:34.002688  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHHostname
	I1030 23:59:34.005920  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:34.006369  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:fa:2a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 00:57:35 +0000 UTC Type:0 Mac:52:54:00:44:fa:2a Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-577087 Clientid:01:52:54:00:44:fa:2a}
	I1030 23:59:34.006406  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined IP address 192.168.50.246 and MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:34.006565  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHPort
	I1030 23:59:34.006810  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHKeyPath
	I1030 23:59:34.007023  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHKeyPath
	I1030 23:59:34.007211  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHUsername
	I1030 23:59:34.007432  240679 main.go:141] libmachine: Using SSH client type: native
	I1030 23:59:34.007951  240679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I1030 23:59:34.007983  240679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-577087' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-577087/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-577087' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1030 23:59:34.135340  240679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1030 23:59:34.135374  240679 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1030 23:59:34.135401  240679 buildroot.go:174] setting up certificates
	I1030 23:59:34.135416  240679 provision.go:83] configureAuth start
	I1030 23:59:34.135450  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetMachineName
	I1030 23:59:34.135823  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetIP
	I1030 23:59:34.139505  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:34.139911  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:fa:2a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 00:57:35 +0000 UTC Type:0 Mac:52:54:00:44:fa:2a Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-577087 Clientid:01:52:54:00:44:fa:2a}
	I1030 23:59:34.139945  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined IP address 192.168.50.246 and MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:34.140139  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHHostname
	I1030 23:59:34.143127  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:34.143551  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:fa:2a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 00:57:35 +0000 UTC Type:0 Mac:52:54:00:44:fa:2a Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-577087 Clientid:01:52:54:00:44:fa:2a}
	I1030 23:59:34.143583  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined IP address 192.168.50.246 and MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:34.143782  240679 provision.go:138] copyHostCerts
	I1030 23:59:34.143871  240679 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1030 23:59:34.143888  240679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1030 23:59:34.143965  240679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1030 23:59:34.144147  240679 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1030 23:59:34.144165  240679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1030 23:59:34.144205  240679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1030 23:59:34.144305  240679 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1030 23:59:34.144324  240679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1030 23:59:34.144356  240679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1030 23:59:34.144445  240679 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-577087 san=[192.168.50.246 192.168.50.246 localhost 127.0.0.1 minikube running-upgrade-577087]
	I1030 23:59:34.354440  240679 provision.go:172] copyRemoteCerts
	I1030 23:59:34.354509  240679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1030 23:59:34.354546  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHHostname
	I1030 23:59:34.357688  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:34.358076  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:fa:2a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 00:57:35 +0000 UTC Type:0 Mac:52:54:00:44:fa:2a Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-577087 Clientid:01:52:54:00:44:fa:2a}
	I1030 23:59:34.358139  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined IP address 192.168.50.246 and MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:34.358324  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHPort
	I1030 23:59:34.358584  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHKeyPath
	I1030 23:59:34.358816  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHUsername
	I1030 23:59:34.358995  240679 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/running-upgrade-577087/id_rsa Username:docker}
	I1030 23:59:34.454273  240679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1030 23:59:34.472738  240679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1030 23:59:34.489728  240679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1030 23:59:34.506175  240679 provision.go:86] duration metric: configureAuth took 370.738369ms
	I1030 23:59:34.506209  240679 buildroot.go:189] setting minikube options for container-runtime
	I1030 23:59:34.506421  240679 config.go:182] Loaded profile config "running-upgrade-577087": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1030 23:59:34.506540  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHHostname
	I1030 23:59:34.509530  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:34.509869  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:fa:2a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 00:57:35 +0000 UTC Type:0 Mac:52:54:00:44:fa:2a Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-577087 Clientid:01:52:54:00:44:fa:2a}
	I1030 23:59:34.509904  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined IP address 192.168.50.246 and MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:34.510219  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHPort
	I1030 23:59:34.510475  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHKeyPath
	I1030 23:59:34.510653  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHKeyPath
	I1030 23:59:34.510855  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHUsername
	I1030 23:59:34.511057  240679 main.go:141] libmachine: Using SSH client type: native
	I1030 23:59:34.511524  240679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I1030 23:59:34.511560  240679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1030 23:59:35.231806  240679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1030 23:59:35.231837  240679 machine.go:91] provisioned docker machine in 1.400095066s
	I1030 23:59:35.231851  240679 start.go:300] post-start starting for "running-upgrade-577087" (driver="kvm2")
	I1030 23:59:35.231863  240679 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1030 23:59:35.231888  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .DriverName
	I1030 23:59:35.232325  240679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1030 23:59:35.232372  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHHostname
	I1030 23:59:35.235384  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:35.235753  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:fa:2a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 00:57:35 +0000 UTC Type:0 Mac:52:54:00:44:fa:2a Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-577087 Clientid:01:52:54:00:44:fa:2a}
	I1030 23:59:35.235785  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined IP address 192.168.50.246 and MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:35.235981  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHPort
	I1030 23:59:35.236174  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHKeyPath
	I1030 23:59:35.236371  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHUsername
	I1030 23:59:35.236548  240679 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/running-upgrade-577087/id_rsa Username:docker}
	I1030 23:59:35.329685  240679 ssh_runner.go:195] Run: cat /etc/os-release
	I1030 23:59:35.334040  240679 info.go:137] Remote host: Buildroot 2019.02.7
	I1030 23:59:35.334069  240679 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1030 23:59:35.334153  240679 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1030 23:59:35.334253  240679 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1030 23:59:35.334372  240679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1030 23:59:35.340073  240679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1030 23:59:35.358876  240679 start.go:303] post-start completed in 127.008288ms
	I1030 23:59:35.358902  240679 fix.go:56] fixHost completed within 1.554191444s
	I1030 23:59:35.358929  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHHostname
	I1030 23:59:35.362207  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:35.362618  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:fa:2a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 00:57:35 +0000 UTC Type:0 Mac:52:54:00:44:fa:2a Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-577087 Clientid:01:52:54:00:44:fa:2a}
	I1030 23:59:35.362649  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined IP address 192.168.50.246 and MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:35.362846  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHPort
	I1030 23:59:35.363045  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHKeyPath
	I1030 23:59:35.363217  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHKeyPath
	I1030 23:59:35.363334  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHUsername
	I1030 23:59:35.363493  240679 main.go:141] libmachine: Using SSH client type: native
	I1030 23:59:35.363891  240679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I1030 23:59:35.363904  240679 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1030 23:59:35.507728  240679 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698710375.502947367
	
	I1030 23:59:35.507754  240679 fix.go:206] guest clock: 1698710375.502947367
	I1030 23:59:35.507764  240679 fix.go:219] Guest: 2023-10-30 23:59:35.502947367 +0000 UTC Remote: 2023-10-30 23:59:35.35890621 +0000 UTC m=+20.144042336 (delta=144.041157ms)
	I1030 23:59:35.507821  240679 fix.go:190] guest clock delta is within tolerance: 144.041157ms
	I1030 23:59:35.507829  240679 start.go:83] releasing machines lock for "running-upgrade-577087", held for 1.703156467s
	I1030 23:59:35.507863  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .DriverName
	I1030 23:59:35.508271  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetIP
	I1030 23:59:35.511789  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:35.512508  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:fa:2a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 00:57:35 +0000 UTC Type:0 Mac:52:54:00:44:fa:2a Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-577087 Clientid:01:52:54:00:44:fa:2a}
	I1030 23:59:35.512561  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined IP address 192.168.50.246 and MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:35.512859  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .DriverName
	I1030 23:59:35.513550  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .DriverName
	I1030 23:59:35.513815  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .DriverName
	I1030 23:59:35.513914  240679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1030 23:59:35.513976  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHHostname
	I1030 23:59:35.514256  240679 ssh_runner.go:195] Run: cat /version.json
	I1030 23:59:35.514335  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHHostname
	I1030 23:59:35.517976  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:35.518008  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:35.518512  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:fa:2a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 00:57:35 +0000 UTC Type:0 Mac:52:54:00:44:fa:2a Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-577087 Clientid:01:52:54:00:44:fa:2a}
	I1030 23:59:35.518648  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined IP address 192.168.50.246 and MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:35.518715  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:fa:2a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 00:57:35 +0000 UTC Type:0 Mac:52:54:00:44:fa:2a Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-577087 Clientid:01:52:54:00:44:fa:2a}
	I1030 23:59:35.518739  240679 main.go:141] libmachine: (running-upgrade-577087) DBG | domain running-upgrade-577087 has defined IP address 192.168.50.246 and MAC address 52:54:00:44:fa:2a in network minikube-net
	I1030 23:59:35.519046  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHPort
	I1030 23:59:35.519298  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHPort
	I1030 23:59:35.519494  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHKeyPath
	I1030 23:59:35.519535  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHKeyPath
	I1030 23:59:35.519850  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHUsername
	I1030 23:59:35.519894  240679 main.go:141] libmachine: (running-upgrade-577087) Calling .GetSSHUsername
	I1030 23:59:35.520094  240679 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/running-upgrade-577087/id_rsa Username:docker}
	I1030 23:59:35.520683  240679 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/running-upgrade-577087/id_rsa Username:docker}
	W1030 23:59:35.631431  240679 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1030 23:59:35.631520  240679 ssh_runner.go:195] Run: systemctl --version
	I1030 23:59:35.663737  240679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1030 23:59:35.843945  240679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1030 23:59:35.854709  240679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1030 23:59:35.854783  240679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1030 23:59:35.862343  240679 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1030 23:59:35.862366  240679 start.go:472] detecting cgroup driver to use...
	I1030 23:59:35.862437  240679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1030 23:59:35.875555  240679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1030 23:59:35.889616  240679 docker.go:198] disabling cri-docker service (if available) ...
	I1030 23:59:35.889669  240679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1030 23:59:35.910670  240679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1030 23:59:35.922628  240679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1030 23:59:35.934031  240679 docker.go:208] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1030 23:59:35.934090  240679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1030 23:59:36.091967  240679 docker.go:214] disabling docker service ...
	I1030 23:59:36.092064  240679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1030 23:59:37.116037  240679 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.023933046s)
	I1030 23:59:37.116133  240679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1030 23:59:37.131897  240679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1030 23:59:37.306573  240679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1030 23:59:37.520733  240679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1030 23:59:37.546923  240679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1030 23:59:37.615882  240679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1030 23:59:37.616011  240679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1030 23:59:37.627211  240679 out.go:177] 
	W1030 23:59:37.628678  240679 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1030 23:59:37.628744  240679 out.go:239] * 
	* 
	W1030 23:59:37.629724  240679 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1030 23:59:37.631852  240679 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-577087 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
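The exit status 90 traces to the RUNTIME_ENABLE step in the stderr above: the new binary rewrites pause_image in /etc/crio/crio.conf.d/02-crio.conf, but the Buildroot 2019.02.7 guest provisioned by minikube v1.6.2 does not ship that drop-in, so the sed exits non-zero. Below is a rough sketch of a more defensive version of that edit, run inside the guest; it only illustrates the failure mode and is not the change minikube itself ships, and the [crio.image] section name is an assumption about where cri-o expects pause_image.

    # Make sure the drop-in exists before editing it (the missing file is what
    # broke the upgrade in the log above), then set the pause image and restart crio.
    sudo mkdir -p /etc/crio/crio.conf.d
    sudo touch /etc/crio/crio.conf.d/02-crio.conf
    if grep -q 'pause_image = ' /etc/crio/crio.conf.d/02-crio.conf; then
      sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf
    else
      printf '[crio.image]\npause_image = "registry.k8s.io/pause:3.1"\n' | sudo tee -a /etc/crio/crio.conf.d/02-crio.conf
    fi
    sudo systemctl restart crio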
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-30 23:59:37.655907352 +0000 UTC m=+3479.905924279
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-577087 -n running-upgrade-577087
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-577087 -n running-upgrade-577087: exit status 4 (320.841094ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1030 23:59:37.935330  241033 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-577087" does not appear in /home/jenkins/minikube-integration/17527-208817/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-577087" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-577087" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-577087
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-577087: (1.265871574s)
--- FAIL: TestRunningBinaryUpgrade (155.58s)
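For reference, the flow this test drives can be replayed by hand from the commands recorded above (a sketch only; the versioned binary under /tmp and the profile name were generated for this run and will differ elsewhere):

    # 1. Bring up a cluster with the old release (v1.6.2 in this run).
    /tmp/minikube-v1.6.2.3439285495.exe start -p running-upgrade-577087 \
      --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    # 2. Re-start the still-running profile with the binary under test;
    #    this is the step that exited with status 90 above.
    out/minikube-linux-amd64 start -p running-upgrade-577087 \
      --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
    # 3. Clean up, as the post-mortem helper does.
    out/minikube-linux-amd64 delete -p running-upgrade-577087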

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (271.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.1246365422.exe start -p stopped-upgrade-237143 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.1246365422.exe start -p stopped-upgrade-237143 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m16.669151792s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.1246365422.exe -p stopped-upgrade-237143 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.1246365422.exe -p stopped-upgrade-237143 stop: (1m32.795208532s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-237143 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-237143 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (42.269771556s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-237143] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17527
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-237143 in cluster stopped-upgrade-237143
	* Restarting existing kvm2 VM for "stopped-upgrade-237143" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1031 00:03:29.154465  246118 out.go:296] Setting OutFile to fd 1 ...
	I1031 00:03:29.154580  246118 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:03:29.154591  246118 out.go:309] Setting ErrFile to fd 2...
	I1031 00:03:29.154597  246118 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:03:29.154801  246118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1031 00:03:29.155348  246118 out.go:303] Setting JSON to false
	I1031 00:03:29.156394  246118 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27961,"bootTime":1698682648,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 00:03:29.156458  246118 start.go:138] virtualization: kvm guest
	I1031 00:03:29.159158  246118 out.go:177] * [stopped-upgrade-237143] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 00:03:29.160544  246118 out.go:177]   - MINIKUBE_LOCATION=17527
	I1031 00:03:29.160549  246118 notify.go:220] Checking for updates...
	I1031 00:03:29.162044  246118 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 00:03:29.163459  246118 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:03:29.164681  246118 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1031 00:03:29.165985  246118 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 00:03:29.167331  246118 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 00:03:29.169131  246118 config.go:182] Loaded profile config "stopped-upgrade-237143": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1031 00:03:29.169147  246118 start_flags.go:697] config upgrade: Driver=kvm2
	I1031 00:03:29.169156  246118 start_flags.go:709] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df
	I1031 00:03:29.169257  246118 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/stopped-upgrade-237143/config.json ...
	I1031 00:03:29.169877  246118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:03:29.169961  246118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:03:29.186244  246118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36007
	I1031 00:03:29.186661  246118 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:03:29.187246  246118 main.go:141] libmachine: Using API Version  1
	I1031 00:03:29.187272  246118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:03:29.187605  246118 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:03:29.187819  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .DriverName
	I1031 00:03:29.189860  246118 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1031 00:03:29.191011  246118 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 00:03:29.191324  246118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:03:29.191366  246118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:03:29.209731  246118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38459
	I1031 00:03:29.210240  246118 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:03:29.210894  246118 main.go:141] libmachine: Using API Version  1
	I1031 00:03:29.210925  246118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:03:29.211383  246118 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:03:29.211600  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .DriverName
	I1031 00:03:29.261154  246118 out.go:177] * Using the kvm2 driver based on existing profile
	I1031 00:03:29.262513  246118 start.go:298] selected driver: kvm2
	I1031 00:03:29.262536  246118 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-237143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.180 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1031 00:03:29.262679  246118 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 00:03:29.263488  246118 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:03:29.263610  246118 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17527-208817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 00:03:29.285528  246118 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 00:03:29.286040  246118 cni.go:84] Creating CNI manager for ""
	I1031 00:03:29.286062  246118 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1031 00:03:29.286075  246118 start_flags.go:323] config:
	{Name:stopped-upgrade-237143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.180 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1031 00:03:29.286305  246118 iso.go:125] acquiring lock: {Name:mk17c26869b21ec4c3726ac5b4b2fb393d92c043 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:03:29.288257  246118 out.go:177] * Starting control plane node stopped-upgrade-237143 in cluster stopped-upgrade-237143
	I1031 00:03:29.290705  246118 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1031 00:03:29.318105  246118 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1031 00:03:29.318289  246118 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/stopped-upgrade-237143/config.json ...
	I1031 00:03:29.318341  246118 cache.go:107] acquiring lock: {Name:mk2c1bd2158b20fe87f0df93928ed44b7086d2b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:03:29.318379  246118 cache.go:107] acquiring lock: {Name:mkb343ce6f6fee532319b7f404307089fda8928c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:03:29.318457  246118 cache.go:115] /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1031 00:03:29.318480  246118 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 152.915µs
	I1031 00:03:29.318493  246118 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1031 00:03:29.318477  246118 cache.go:107] acquiring lock: {Name:mk463dc738a6024c485d67e69c733dd1613dbb05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:03:29.318515  246118 cache.go:107] acquiring lock: {Name:mk698bbf6ca439d1c8312bd77af7026352b426ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:03:29.318537  246118 cache.go:115] /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1031 00:03:29.318557  246118 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 82.397µs
	I1031 00:03:29.318572  246118 cache.go:115] /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1031 00:03:29.318577  246118 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1031 00:03:29.318582  246118 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 69.366µs
	I1031 00:03:29.318592  246118 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1031 00:03:29.318592  246118 start.go:365] acquiring machines lock for stopped-upgrade-237143: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 00:03:29.318607  246118 cache.go:107] acquiring lock: {Name:mk6870915238206bee42523a3c61b0972894e28d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:03:29.318597  246118 cache.go:107] acquiring lock: {Name:mk00f1816d1ae25ed19174cc1ed4978ea5624e9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:03:29.318663  246118 start.go:369] acquired machines lock for "stopped-upgrade-237143" in 29.59µs
	I1031 00:03:29.318683  246118 cache.go:115] /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1031 00:03:29.318691  246118 cache.go:115] /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1031 00:03:29.318462  246118 cache.go:115] /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1031 00:03:29.318611  246118 cache.go:107] acquiring lock: {Name:mk47cd24dee0f3c893916cdf3d12033bb43118a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:03:29.318753  246118 cache.go:115] /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1031 00:03:29.318771  246118 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 160.69µs
	I1031 00:03:29.318784  246118 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1031 00:03:29.318692  246118 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:03:29.318799  246118 fix.go:54] fixHost starting: minikube
	I1031 00:03:29.318631  246118 cache.go:107] acquiring lock: {Name:mkabdd1e84857a847f5f32881a5f748726556ef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:03:29.318693  246118 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 88.277µs
	I1031 00:03:29.318942  246118 cache.go:115] /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1031 00:03:29.318956  246118 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 327.219µs
	I1031 00:03:29.318971  246118 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1031 00:03:29.318942  246118 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1031 00:03:29.318713  246118 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 343.203µs
	I1031 00:03:29.318985  246118 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1031 00:03:29.318707  246118 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 116.152µs
	I1031 00:03:29.318992  246118 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1031 00:03:29.319000  246118 cache.go:87] Successfully saved all images to host disk.
	I1031 00:03:29.319194  246118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:03:29.319236  246118 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:03:29.337729  246118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37383
	I1031 00:03:29.338192  246118 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:03:29.338807  246118 main.go:141] libmachine: Using API Version  1
	I1031 00:03:29.338832  246118 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:03:29.339211  246118 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:03:29.339407  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .DriverName
	I1031 00:03:29.339583  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetState
	I1031 00:03:29.342146  246118 fix.go:102] recreateIfNeeded on stopped-upgrade-237143: state=Stopped err=<nil>
	I1031 00:03:29.342178  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .DriverName
	W1031 00:03:29.342377  246118 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:03:29.344576  246118 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-237143" ...
	I1031 00:03:29.345957  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .Start
	I1031 00:03:29.346178  246118 main.go:141] libmachine: (stopped-upgrade-237143) Ensuring networks are active...
	I1031 00:03:29.347078  246118 main.go:141] libmachine: (stopped-upgrade-237143) Ensuring network default is active
	I1031 00:03:29.347481  246118 main.go:141] libmachine: (stopped-upgrade-237143) Ensuring network minikube-net is active
	I1031 00:03:29.347974  246118 main.go:141] libmachine: (stopped-upgrade-237143) Getting domain xml...
	I1031 00:03:29.348820  246118 main.go:141] libmachine: (stopped-upgrade-237143) Creating domain...
	I1031 00:03:30.878419  246118 main.go:141] libmachine: (stopped-upgrade-237143) Waiting to get IP...
	I1031 00:03:30.879541  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:03:30.880063  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | unable to find current IP address of domain stopped-upgrade-237143 in network minikube-net
	I1031 00:03:30.880146  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | I1031 00:03:30.880037  246153 retry.go:31] will retry after 224.699622ms: waiting for machine to come up
	I1031 00:03:31.106651  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:03:31.107240  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | unable to find current IP address of domain stopped-upgrade-237143 in network minikube-net
	I1031 00:03:31.107267  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | I1031 00:03:31.107189  246153 retry.go:31] will retry after 337.664772ms: waiting for machine to come up
	I1031 00:03:31.447045  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:03:31.447680  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | unable to find current IP address of domain stopped-upgrade-237143 in network minikube-net
	I1031 00:03:31.447714  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | I1031 00:03:31.447607  246153 retry.go:31] will retry after 416.274483ms: waiting for machine to come up
	I1031 00:03:31.865233  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:03:31.865774  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | unable to find current IP address of domain stopped-upgrade-237143 in network minikube-net
	I1031 00:03:31.865811  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | I1031 00:03:31.865720  246153 retry.go:31] will retry after 526.161299ms: waiting for machine to come up
	I1031 00:03:32.393361  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:03:32.393938  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | unable to find current IP address of domain stopped-upgrade-237143 in network minikube-net
	I1031 00:03:32.393971  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | I1031 00:03:32.393880  246153 retry.go:31] will retry after 732.465013ms: waiting for machine to come up
	I1031 00:03:33.127894  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:03:33.128390  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | unable to find current IP address of domain stopped-upgrade-237143 in network minikube-net
	I1031 00:03:33.128421  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | I1031 00:03:33.128309  246153 retry.go:31] will retry after 620.062651ms: waiting for machine to come up
	I1031 00:03:33.750397  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:03:33.751106  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | unable to find current IP address of domain stopped-upgrade-237143 in network minikube-net
	I1031 00:03:33.751137  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | I1031 00:03:33.751009  246153 retry.go:31] will retry after 721.488474ms: waiting for machine to come up
	I1031 00:03:34.473893  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:03:34.474575  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | unable to find current IP address of domain stopped-upgrade-237143 in network minikube-net
	I1031 00:03:34.474607  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | I1031 00:03:34.474460  246153 retry.go:31] will retry after 925.822672ms: waiting for machine to come up
	I1031 00:03:35.401860  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:03:35.402441  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | unable to find current IP address of domain stopped-upgrade-237143 in network minikube-net
	I1031 00:03:35.402477  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | I1031 00:03:35.402367  246153 retry.go:31] will retry after 1.42593921s: waiting for machine to come up
	I1031 00:03:36.829839  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:03:36.830493  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | unable to find current IP address of domain stopped-upgrade-237143 in network minikube-net
	I1031 00:03:36.830517  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | I1031 00:03:36.830393  246153 retry.go:31] will retry after 1.644790309s: waiting for machine to come up
	I1031 00:03:38.476808  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:03:38.477592  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | unable to find current IP address of domain stopped-upgrade-237143 in network minikube-net
	I1031 00:03:38.477676  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | I1031 00:03:38.477549  246153 retry.go:31] will retry after 2.709832967s: waiting for machine to come up
	I1031 00:03:41.188714  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:03:41.189378  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | unable to find current IP address of domain stopped-upgrade-237143 in network minikube-net
	I1031 00:03:41.189411  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | I1031 00:03:41.189336  246153 retry.go:31] will retry after 2.672370067s: waiting for machine to come up
	I1031 00:03:43.864694  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:03:43.865272  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | unable to find current IP address of domain stopped-upgrade-237143 in network minikube-net
	I1031 00:03:43.865302  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | I1031 00:03:43.865229  246153 retry.go:31] will retry after 4.210719544s: waiting for machine to come up
	I1031 00:03:48.078454  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:03:48.078997  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | unable to find current IP address of domain stopped-upgrade-237143 in network minikube-net
	I1031 00:03:48.079023  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | I1031 00:03:48.078936  246153 retry.go:31] will retry after 3.5944011s: waiting for machine to come up
	I1031 00:03:51.674917  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:03:51.675396  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | unable to find current IP address of domain stopped-upgrade-237143 in network minikube-net
	I1031 00:03:51.675425  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | I1031 00:03:51.675343  246153 retry.go:31] will retry after 4.52697253s: waiting for machine to come up
	I1031 00:03:56.204665  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:03:56.205188  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | unable to find current IP address of domain stopped-upgrade-237143 in network minikube-net
	I1031 00:03:56.205223  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | I1031 00:03:56.205130  246153 retry.go:31] will retry after 7.59933013s: waiting for machine to come up
	I1031 00:04:03.806291  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:03.806894  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has current primary IP address 192.168.50.180 and MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:03.806924  246118 main.go:141] libmachine: (stopped-upgrade-237143) Found IP for machine: 192.168.50.180
	I1031 00:04:03.806938  246118 main.go:141] libmachine: (stopped-upgrade-237143) Reserving static IP address...
	I1031 00:04:03.807435  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | found host DHCP lease matching {name: "stopped-upgrade-237143", mac: "52:54:00:fc:f4:56", ip: "192.168.50.180"} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 01:00:10 +0000 UTC Type:0 Mac:52:54:00:fc:f4:56 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:stopped-upgrade-237143 Clientid:01:52:54:00:fc:f4:56}
	I1031 00:04:03.807471  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-237143", mac: "52:54:00:fc:f4:56", ip: "192.168.50.180"}
	I1031 00:04:03.807483  246118 main.go:141] libmachine: (stopped-upgrade-237143) Reserved static IP address: 192.168.50.180
	I1031 00:04:03.807520  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | Getting to WaitForSSH function...
	I1031 00:04:03.807558  246118 main.go:141] libmachine: (stopped-upgrade-237143) Waiting for SSH to be available...
	I1031 00:04:03.810105  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:03.810592  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f4:56", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 01:00:10 +0000 UTC Type:0 Mac:52:54:00:fc:f4:56 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:stopped-upgrade-237143 Clientid:01:52:54:00:fc:f4:56}
	I1031 00:04:03.810630  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined IP address 192.168.50.180 and MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:03.810784  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | Using SSH client type: external
	I1031 00:04:03.810802  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/stopped-upgrade-237143/id_rsa (-rw-------)
	I1031 00:04:03.810891  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/stopped-upgrade-237143/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:04:03.810925  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | About to run SSH command:
	I1031 00:04:03.810943  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | exit 0
	I1031 00:04:03.944915  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | SSH cmd err, output: <nil>: 
	I1031 00:04:03.945339  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetConfigRaw
	I1031 00:04:03.946278  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetIP
	I1031 00:04:03.949356  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:03.949783  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f4:56", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 01:00:10 +0000 UTC Type:0 Mac:52:54:00:fc:f4:56 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:stopped-upgrade-237143 Clientid:01:52:54:00:fc:f4:56}
	I1031 00:04:03.949822  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined IP address 192.168.50.180 and MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:03.950138  246118 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/stopped-upgrade-237143/config.json ...
	I1031 00:04:03.950375  246118 machine.go:88] provisioning docker machine ...
	I1031 00:04:03.950399  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .DriverName
	I1031 00:04:03.950667  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetMachineName
	I1031 00:04:03.950877  246118 buildroot.go:166] provisioning hostname "stopped-upgrade-237143"
	I1031 00:04:03.950905  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetMachineName
	I1031 00:04:03.951090  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHHostname
	I1031 00:04:03.953709  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:03.954113  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f4:56", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 01:00:10 +0000 UTC Type:0 Mac:52:54:00:fc:f4:56 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:stopped-upgrade-237143 Clientid:01:52:54:00:fc:f4:56}
	I1031 00:04:03.954155  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined IP address 192.168.50.180 and MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:03.954299  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHPort
	I1031 00:04:03.954496  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHKeyPath
	I1031 00:04:03.954684  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHKeyPath
	I1031 00:04:03.954873  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHUsername
	I1031 00:04:03.955065  246118 main.go:141] libmachine: Using SSH client type: native
	I1031 00:04:03.955532  246118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1031 00:04:03.955552  246118 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-237143 && echo "stopped-upgrade-237143" | sudo tee /etc/hostname
	I1031 00:04:04.072334  246118 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-237143
	
	I1031 00:04:04.072367  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHHostname
	I1031 00:04:04.075359  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:04.075819  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f4:56", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 01:00:10 +0000 UTC Type:0 Mac:52:54:00:fc:f4:56 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:stopped-upgrade-237143 Clientid:01:52:54:00:fc:f4:56}
	I1031 00:04:04.075870  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined IP address 192.168.50.180 and MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:04.076093  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHPort
	I1031 00:04:04.076310  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHKeyPath
	I1031 00:04:04.076517  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHKeyPath
	I1031 00:04:04.076661  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHUsername
	I1031 00:04:04.076840  246118 main.go:141] libmachine: Using SSH client type: native
	I1031 00:04:04.077215  246118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1031 00:04:04.077235  246118 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-237143' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-237143/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-237143' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:04:04.193898  246118 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:04:04.193927  246118 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:04:04.193946  246118 buildroot.go:174] setting up certificates
	I1031 00:04:04.193955  246118 provision.go:83] configureAuth start
	I1031 00:04:04.193975  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetMachineName
	I1031 00:04:04.194240  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetIP
	I1031 00:04:04.197097  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:04.197453  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f4:56", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 01:00:10 +0000 UTC Type:0 Mac:52:54:00:fc:f4:56 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:stopped-upgrade-237143 Clientid:01:52:54:00:fc:f4:56}
	I1031 00:04:04.197486  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined IP address 192.168.50.180 and MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:04.197584  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHHostname
	I1031 00:04:04.200383  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:04.200769  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f4:56", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 01:00:10 +0000 UTC Type:0 Mac:52:54:00:fc:f4:56 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:stopped-upgrade-237143 Clientid:01:52:54:00:fc:f4:56}
	I1031 00:04:04.200804  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined IP address 192.168.50.180 and MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:04.200969  246118 provision.go:138] copyHostCerts
	I1031 00:04:04.201044  246118 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:04:04.201065  246118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:04:04.201136  246118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:04:04.201245  246118 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:04:04.201255  246118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:04:04.201283  246118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:04:04.201380  246118 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:04:04.201388  246118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:04:04.201412  246118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:04:04.201483  246118 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-237143 san=[192.168.50.180 192.168.50.180 localhost 127.0.0.1 minikube stopped-upgrade-237143]
	I1031 00:04:04.406616  246118 provision.go:172] copyRemoteCerts
	I1031 00:04:04.406692  246118 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:04:04.406718  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHHostname
	I1031 00:04:04.409806  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:04.410165  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f4:56", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 01:00:10 +0000 UTC Type:0 Mac:52:54:00:fc:f4:56 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:stopped-upgrade-237143 Clientid:01:52:54:00:fc:f4:56}
	I1031 00:04:04.410200  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined IP address 192.168.50.180 and MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:04.410427  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHPort
	I1031 00:04:04.410653  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHKeyPath
	I1031 00:04:04.410869  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHUsername
	I1031 00:04:04.411045  246118 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/stopped-upgrade-237143/id_rsa Username:docker}
	I1031 00:04:04.495107  246118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:04:04.510080  246118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1031 00:04:04.525150  246118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 00:04:04.539413  246118 provision.go:86] duration metric: configureAuth took 345.447923ms
	I1031 00:04:04.539436  246118 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:04:04.539578  246118 config.go:182] Loaded profile config "stopped-upgrade-237143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1031 00:04:04.539661  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHHostname
	I1031 00:04:04.542587  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:04.543076  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f4:56", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 01:00:10 +0000 UTC Type:0 Mac:52:54:00:fc:f4:56 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:stopped-upgrade-237143 Clientid:01:52:54:00:fc:f4:56}
	I1031 00:04:04.543122  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined IP address 192.168.50.180 and MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:04.543230  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHPort
	I1031 00:04:04.543426  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHKeyPath
	I1031 00:04:04.543614  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHKeyPath
	I1031 00:04:04.543836  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHUsername
	I1031 00:04:04.544015  246118 main.go:141] libmachine: Using SSH client type: native
	I1031 00:04:04.544351  246118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1031 00:04:04.544373  246118 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:04:10.479948  246118 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:04:10.479984  246118 machine.go:91] provisioned docker machine in 6.529594283s
	I1031 00:04:10.479996  246118 start.go:300] post-start starting for "stopped-upgrade-237143" (driver="kvm2")
	I1031 00:04:10.480006  246118 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:04:10.480022  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .DriverName
	I1031 00:04:10.480438  246118 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:04:10.480476  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHHostname
	I1031 00:04:10.483727  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:10.484166  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f4:56", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 01:00:10 +0000 UTC Type:0 Mac:52:54:00:fc:f4:56 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:stopped-upgrade-237143 Clientid:01:52:54:00:fc:f4:56}
	I1031 00:04:10.484199  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined IP address 192.168.50.180 and MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:10.484418  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHPort
	I1031 00:04:10.484639  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHKeyPath
	I1031 00:04:10.484826  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHUsername
	I1031 00:04:10.485008  246118 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/stopped-upgrade-237143/id_rsa Username:docker}
	I1031 00:04:10.563972  246118 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:04:10.568516  246118 info.go:137] Remote host: Buildroot 2019.02.7
	I1031 00:04:10.568548  246118 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:04:10.568617  246118 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:04:10.568715  246118 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:04:10.568800  246118 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:04:10.575144  246118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:04:10.589249  246118 start.go:303] post-start completed in 109.239155ms
	I1031 00:04:10.589273  246118 fix.go:56] fixHost completed within 41.270474414s
	I1031 00:04:10.589299  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHHostname
	I1031 00:04:10.592011  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:10.592323  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f4:56", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 01:00:10 +0000 UTC Type:0 Mac:52:54:00:fc:f4:56 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:stopped-upgrade-237143 Clientid:01:52:54:00:fc:f4:56}
	I1031 00:04:10.592349  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined IP address 192.168.50.180 and MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:10.592548  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHPort
	I1031 00:04:10.592768  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHKeyPath
	I1031 00:04:10.592995  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHKeyPath
	I1031 00:04:10.593186  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHUsername
	I1031 00:04:10.593472  246118 main.go:141] libmachine: Using SSH client type: native
	I1031 00:04:10.593866  246118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1031 00:04:10.593886  246118 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1031 00:04:10.701251  246118 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698710650.639342541
	
	I1031 00:04:10.701277  246118 fix.go:206] guest clock: 1698710650.639342541
	I1031 00:04:10.701293  246118 fix.go:219] Guest: 2023-10-31 00:04:10.639342541 +0000 UTC Remote: 2023-10-31 00:04:10.589277636 +0000 UTC m=+41.493421823 (delta=50.064905ms)
	I1031 00:04:10.701322  246118 fix.go:190] guest clock delta is within tolerance: 50.064905ms
	I1031 00:04:10.701327  246118 start.go:83] releasing machines lock for "stopped-upgrade-237143", held for 41.382648025s
	I1031 00:04:10.701351  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .DriverName
	I1031 00:04:10.701640  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetIP
	I1031 00:04:10.704251  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:10.704607  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f4:56", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 01:00:10 +0000 UTC Type:0 Mac:52:54:00:fc:f4:56 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:stopped-upgrade-237143 Clientid:01:52:54:00:fc:f4:56}
	I1031 00:04:10.704635  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined IP address 192.168.50.180 and MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:10.704736  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .DriverName
	I1031 00:04:10.705246  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .DriverName
	I1031 00:04:10.705448  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .DriverName
	I1031 00:04:10.705561  246118 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:04:10.705602  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHHostname
	I1031 00:04:10.705690  246118 ssh_runner.go:195] Run: cat /version.json
	I1031 00:04:10.705719  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHHostname
	I1031 00:04:10.708418  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:10.708829  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:10.708865  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f4:56", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 01:00:10 +0000 UTC Type:0 Mac:52:54:00:fc:f4:56 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:stopped-upgrade-237143 Clientid:01:52:54:00:fc:f4:56}
	I1031 00:04:10.708891  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined IP address 192.168.50.180 and MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:10.709002  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHPort
	I1031 00:04:10.709171  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHKeyPath
	I1031 00:04:10.709256  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f4:56", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-31 01:00:10 +0000 UTC Type:0 Mac:52:54:00:fc:f4:56 Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:stopped-upgrade-237143 Clientid:01:52:54:00:fc:f4:56}
	I1031 00:04:10.709299  246118 main.go:141] libmachine: (stopped-upgrade-237143) DBG | domain stopped-upgrade-237143 has defined IP address 192.168.50.180 and MAC address 52:54:00:fc:f4:56 in network minikube-net
	I1031 00:04:10.709347  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHUsername
	I1031 00:04:10.709542  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHPort
	I1031 00:04:10.709544  246118 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/stopped-upgrade-237143/id_rsa Username:docker}
	I1031 00:04:10.709688  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHKeyPath
	I1031 00:04:10.709815  246118 main.go:141] libmachine: (stopped-upgrade-237143) Calling .GetSSHUsername
	I1031 00:04:10.709967  246118 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/stopped-upgrade-237143/id_rsa Username:docker}
	W1031 00:04:10.787207  246118 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1031 00:04:10.787295  246118 ssh_runner.go:195] Run: systemctl --version
	I1031 00:04:10.793130  246118 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:04:10.987971  246118 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:04:10.994287  246118 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:04:10.994371  246118 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:04:10.999860  246118 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1031 00:04:10.999881  246118 start.go:472] detecting cgroup driver to use...
	I1031 00:04:10.999941  246118 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:04:11.009676  246118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:04:11.018226  246118 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:04:11.018284  246118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:04:11.025650  246118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:04:11.033269  246118 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1031 00:04:11.041176  246118 docker.go:208] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1031 00:04:11.041237  246118 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:04:11.126427  246118 docker.go:214] disabling docker service ...
	I1031 00:04:11.126510  246118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:04:11.136904  246118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:04:11.144894  246118 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:04:11.231550  246118 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:04:11.323470  246118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 00:04:11.332521  246118 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:04:11.343945  246118 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1031 00:04:11.344016  246118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:04:11.352763  246118 out.go:177] 
	W1031 00:04:11.354182  246118 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1031 00:04:11.354207  246118 out.go:239] * 
	* 
	W1031 00:04:11.355104  246118 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1031 00:04:11.356621  246118 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-237143 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (271.74s)
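
Note on the failure above: the v1.6.2 guest image has no /etc/crio/crio.conf.d/02-crio.conf, so the sed that rewrites pause_image exits with status 1 and the start aborts with RUNTIME_ENABLE. A minimal standalone Go sketch of a more defensive approach is shown below; the runCmd helper is a hypothetical stand-in for minikube's ssh_runner, not its actual API, and the sketch only illustrates the missing "create the drop-in before editing it" guard.

package main

import (
	"fmt"
	"os/exec"
)

// runCmd runs a shell command locally; hypothetical stand-in for minikube's ssh_runner.
func runCmd(cmd string) error {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v: %s", cmd, err, out)
	}
	return nil
}

// setPauseImage rewrites pause_image in the CRI-O drop-in, creating the file
// first so sed cannot fail with "No such file or directory" on older guest images.
func setPauseImage(image string) error {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	// Ensure the drop-in directory and file exist before editing them.
	if err := runCmd(fmt.Sprintf("sudo mkdir -p /etc/crio/crio.conf.d && sudo touch %s", conf)); err != nil {
		return err
	}
	// Rewrite the key if present, otherwise append it.
	script := fmt.Sprintf(
		`sudo grep -q '^pause_image' %[1]s && sudo sed -i 's|^.*pause_image = .*$|pause_image = "%[2]s"|' %[1]s || echo 'pause_image = "%[2]s"' | sudo tee -a %[1]s`,
		conf, image)
	return runCmd(script)
}

func main() {
	if err := setPauseImage("registry.k8s.io/pause:3.1"); err != nil {
		fmt.Println(err)
	}
}

Whether minikube should create the drop-in or fall back to /etc/crio/crio.conf on older ISOs is a design decision outside the scope of this report; the sketch is only an illustration of the guard the failing run lacked.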

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (68.5s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-511532 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-511532 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.582198852s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-511532] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17527
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-511532 in cluster pause-511532
	* Updating the running kvm2 "pause-511532" VM ...
	* Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-511532" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 23:59:54.644713  241323 out.go:296] Setting OutFile to fd 1 ...
	I1030 23:59:54.644866  241323 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:59:54.644877  241323 out.go:309] Setting ErrFile to fd 2...
	I1030 23:59:54.644888  241323 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:59:54.645086  241323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1030 23:59:54.645727  241323 out.go:303] Setting JSON to false
	I1030 23:59:54.646719  241323 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27747,"bootTime":1698682648,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 23:59:54.646782  241323 start.go:138] virtualization: kvm guest
	I1030 23:59:54.649526  241323 out.go:177] * [pause-511532] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 23:59:54.651175  241323 out.go:177]   - MINIKUBE_LOCATION=17527
	I1030 23:59:54.652548  241323 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 23:59:54.651264  241323 notify.go:220] Checking for updates...
	I1030 23:59:54.655180  241323 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:59:54.656573  241323 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1030 23:59:54.657892  241323 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 23:59:54.659229  241323 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 23:59:54.661240  241323 config.go:182] Loaded profile config "pause-511532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:59:54.661845  241323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:59:54.661907  241323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:59:54.677918  241323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46343
	I1030 23:59:54.678363  241323 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:59:54.678934  241323 main.go:141] libmachine: Using API Version  1
	I1030 23:59:54.678955  241323 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:59:54.679341  241323 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:59:54.679543  241323 main.go:141] libmachine: (pause-511532) Calling .DriverName
	I1030 23:59:54.679766  241323 driver.go:378] Setting default libvirt URI to qemu:///system
	I1030 23:59:54.680050  241323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:59:54.680083  241323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:59:54.695086  241323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45767
	I1030 23:59:54.695487  241323 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:59:54.695954  241323 main.go:141] libmachine: Using API Version  1
	I1030 23:59:54.695980  241323 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:59:54.696314  241323 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:59:54.696498  241323 main.go:141] libmachine: (pause-511532) Calling .DriverName
	I1030 23:59:54.733057  241323 out.go:177] * Using the kvm2 driver based on existing profile
	I1030 23:59:54.734520  241323 start.go:298] selected driver: kvm2
	I1030 23:59:54.734534  241323 start.go:902] validating driver "kvm2" against &{Name:pause-511532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.3 ClusterName:pause-511532 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-install
er:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1030 23:59:54.734670  241323 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 23:59:54.735045  241323 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:59:54.735118  241323 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17527-208817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 23:59:54.750128  241323 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1030 23:59:54.750886  241323 cni.go:84] Creating CNI manager for ""
	I1030 23:59:54.750905  241323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 23:59:54.750919  241323 start_flags.go:323] config:
	{Name:pause-511532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:pause-511532 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false po
rtainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1030 23:59:54.751158  241323 iso.go:125] acquiring lock: {Name:mk17c26869b21ec4c3726ac5b4b2fb393d92c043 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:59:54.753286  241323 out.go:177] * Starting control plane node pause-511532 in cluster pause-511532
	I1030 23:59:54.754858  241323 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1030 23:59:54.754907  241323 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1030 23:59:54.754929  241323 cache.go:56] Caching tarball of preloaded images
	I1030 23:59:54.755049  241323 preload.go:174] Found /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1030 23:59:54.755073  241323 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1030 23:59:54.755252  241323 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/pause-511532/config.json ...
	I1030 23:59:54.755467  241323 start.go:365] acquiring machines lock for pause-511532: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 00:00:00.678746  241323 start.go:369] acquired machines lock for "pause-511532" in 5.923248913s
	I1031 00:00:00.678812  241323 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:00:00.678820  241323 fix.go:54] fixHost starting: 
	I1031 00:00:00.679246  241323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:00:00.679287  241323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:00:00.699698  241323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33777
	I1031 00:00:00.700286  241323 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:00:00.700869  241323 main.go:141] libmachine: Using API Version  1
	I1031 00:00:00.700896  241323 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:00:00.701486  241323 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:00:00.701657  241323 main.go:141] libmachine: (pause-511532) Calling .DriverName
	I1031 00:00:00.701834  241323 main.go:141] libmachine: (pause-511532) Calling .GetState
	I1031 00:00:00.703946  241323 fix.go:102] recreateIfNeeded on pause-511532: state=Running err=<nil>
	W1031 00:00:00.703976  241323 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:00:00.705978  241323 out.go:177] * Updating the running kvm2 "pause-511532" VM ...
	I1031 00:00:00.707455  241323 machine.go:88] provisioning docker machine ...
	I1031 00:00:00.707488  241323 main.go:141] libmachine: (pause-511532) Calling .DriverName
	I1031 00:00:00.707700  241323 main.go:141] libmachine: (pause-511532) Calling .GetMachineName
	I1031 00:00:00.707917  241323 buildroot.go:166] provisioning hostname "pause-511532"
	I1031 00:00:00.707942  241323 main.go:141] libmachine: (pause-511532) Calling .GetMachineName
	I1031 00:00:00.708095  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHHostname
	I1031 00:00:00.713253  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:00.713812  241323 main.go:141] libmachine: (pause-511532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:3f:54", ip: ""} in network mk-pause-511532: {Iface:virbr3 ExpiryTime:2023-10-31 00:59:05 +0000 UTC Type:0 Mac:52:54:00:7c:3f:54 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:pause-511532 Clientid:01:52:54:00:7c:3f:54}
	I1031 00:00:00.713843  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined IP address 192.168.61.111 and MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:00.714073  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHPort
	I1031 00:00:00.714486  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHKeyPath
	I1031 00:00:00.714622  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHKeyPath
	I1031 00:00:00.714799  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHUsername
	I1031 00:00:00.714968  241323 main.go:141] libmachine: Using SSH client type: native
	I1031 00:00:00.715290  241323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I1031 00:00:00.715299  241323 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-511532 && echo "pause-511532" | sudo tee /etc/hostname
	I1031 00:00:00.865234  241323 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-511532
	
	I1031 00:00:00.865267  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHHostname
	I1031 00:00:00.868870  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:00.869295  241323 main.go:141] libmachine: (pause-511532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:3f:54", ip: ""} in network mk-pause-511532: {Iface:virbr3 ExpiryTime:2023-10-31 00:59:05 +0000 UTC Type:0 Mac:52:54:00:7c:3f:54 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:pause-511532 Clientid:01:52:54:00:7c:3f:54}
	I1031 00:00:00.869376  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined IP address 192.168.61.111 and MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:00.869690  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHPort
	I1031 00:00:00.869907  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHKeyPath
	I1031 00:00:00.870090  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHKeyPath
	I1031 00:00:00.870257  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHUsername
	I1031 00:00:00.870445  241323 main.go:141] libmachine: Using SSH client type: native
	I1031 00:00:00.870936  241323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I1031 00:00:00.870975  241323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-511532' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-511532/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-511532' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:00:01.015011  241323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:00:01.015092  241323 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:00:01.015122  241323 buildroot.go:174] setting up certificates
	I1031 00:00:01.015138  241323 provision.go:83] configureAuth start
	I1031 00:00:01.015166  241323 main.go:141] libmachine: (pause-511532) Calling .GetMachineName
	I1031 00:00:01.015530  241323 main.go:141] libmachine: (pause-511532) Calling .GetIP
	I1031 00:00:01.019514  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:01.020116  241323 main.go:141] libmachine: (pause-511532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:3f:54", ip: ""} in network mk-pause-511532: {Iface:virbr3 ExpiryTime:2023-10-31 00:59:05 +0000 UTC Type:0 Mac:52:54:00:7c:3f:54 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:pause-511532 Clientid:01:52:54:00:7c:3f:54}
	I1031 00:00:01.020175  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined IP address 192.168.61.111 and MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:01.020491  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHHostname
	I1031 00:00:01.023568  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:01.023961  241323 main.go:141] libmachine: (pause-511532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:3f:54", ip: ""} in network mk-pause-511532: {Iface:virbr3 ExpiryTime:2023-10-31 00:59:05 +0000 UTC Type:0 Mac:52:54:00:7c:3f:54 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:pause-511532 Clientid:01:52:54:00:7c:3f:54}
	I1031 00:00:01.024014  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined IP address 192.168.61.111 and MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:01.024240  241323 provision.go:138] copyHostCerts
	I1031 00:00:01.024353  241323 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:00:01.024387  241323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:00:01.024476  241323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:00:01.024629  241323 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:00:01.024653  241323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:00:01.024686  241323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:00:01.024824  241323 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:00:01.024842  241323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:00:01.024867  241323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:00:01.024953  241323 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.pause-511532 san=[192.168.61.111 192.168.61.111 localhost 127.0.0.1 minikube pause-511532]
	I1031 00:00:01.243766  241323 provision.go:172] copyRemoteCerts
	I1031 00:00:01.243863  241323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:00:01.243900  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHHostname
	I1031 00:00:01.247414  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:01.247763  241323 main.go:141] libmachine: (pause-511532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:3f:54", ip: ""} in network mk-pause-511532: {Iface:virbr3 ExpiryTime:2023-10-31 00:59:05 +0000 UTC Type:0 Mac:52:54:00:7c:3f:54 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:pause-511532 Clientid:01:52:54:00:7c:3f:54}
	I1031 00:00:01.247796  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined IP address 192.168.61.111 and MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:01.248081  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHPort
	I1031 00:00:01.248269  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHKeyPath
	I1031 00:00:01.248478  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHUsername
	I1031 00:00:01.248649  241323 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/pause-511532/id_rsa Username:docker}
	I1031 00:00:01.355092  241323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:00:01.394019  241323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1031 00:00:01.433255  241323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 00:00:01.471012  241323 provision.go:86] duration metric: configureAuth took 455.844155ms
	I1031 00:00:01.471051  241323 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:00:01.471367  241323 config.go:182] Loaded profile config "pause-511532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:00:01.471481  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHHostname
	I1031 00:00:01.474962  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:01.475458  241323 main.go:141] libmachine: (pause-511532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:3f:54", ip: ""} in network mk-pause-511532: {Iface:virbr3 ExpiryTime:2023-10-31 00:59:05 +0000 UTC Type:0 Mac:52:54:00:7c:3f:54 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:pause-511532 Clientid:01:52:54:00:7c:3f:54}
	I1031 00:00:01.475488  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined IP address 192.168.61.111 and MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:01.475801  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHPort
	I1031 00:00:01.476037  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHKeyPath
	I1031 00:00:01.476207  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHKeyPath
	I1031 00:00:01.476430  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHUsername
	I1031 00:00:01.476645  241323 main.go:141] libmachine: Using SSH client type: native
	I1031 00:00:01.477019  241323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I1031 00:00:01.477043  241323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:00:07.104066  241323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:00:07.104101  241323 machine.go:91] provisioned docker machine in 6.396630137s
	I1031 00:00:07.104117  241323 start.go:300] post-start starting for "pause-511532" (driver="kvm2")
	I1031 00:00:07.104131  241323 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:00:07.104174  241323 main.go:141] libmachine: (pause-511532) Calling .DriverName
	I1031 00:00:07.104554  241323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:00:07.104597  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHHostname
	I1031 00:00:07.464578  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:07.468765  241323 main.go:141] libmachine: (pause-511532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:3f:54", ip: ""} in network mk-pause-511532: {Iface:virbr3 ExpiryTime:2023-10-31 00:59:05 +0000 UTC Type:0 Mac:52:54:00:7c:3f:54 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:pause-511532 Clientid:01:52:54:00:7c:3f:54}
	I1031 00:00:07.468809  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined IP address 192.168.61.111 and MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:07.469111  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHPort
	I1031 00:00:07.469370  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHKeyPath
	I1031 00:00:07.469566  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHUsername
	I1031 00:00:07.469779  241323 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/pause-511532/id_rsa Username:docker}
	I1031 00:00:07.672122  241323 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:00:07.676778  241323 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:00:07.676805  241323 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:00:07.676868  241323 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:00:07.676960  241323 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:00:07.677083  241323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:00:07.689316  241323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:00:07.718209  241323 start.go:303] post-start completed in 614.071337ms
	I1031 00:00:07.718242  241323 fix.go:56] fixHost completed within 7.039420292s
	I1031 00:00:07.718266  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHHostname
	I1031 00:00:07.721713  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:07.722218  241323 main.go:141] libmachine: (pause-511532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:3f:54", ip: ""} in network mk-pause-511532: {Iface:virbr3 ExpiryTime:2023-10-31 00:59:05 +0000 UTC Type:0 Mac:52:54:00:7c:3f:54 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:pause-511532 Clientid:01:52:54:00:7c:3f:54}
	I1031 00:00:07.722255  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined IP address 192.168.61.111 and MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:07.722461  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHPort
	I1031 00:00:07.722687  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHKeyPath
	I1031 00:00:07.722861  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHKeyPath
	I1031 00:00:07.723012  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHUsername
	I1031 00:00:07.723232  241323 main.go:141] libmachine: Using SSH client type: native
	I1031 00:00:07.723737  241323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I1031 00:00:07.723763  241323 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1031 00:00:07.861186  241323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698710407.857521791
	
	I1031 00:00:07.861217  241323 fix.go:206] guest clock: 1698710407.857521791
	I1031 00:00:07.861228  241323 fix.go:219] Guest: 2023-10-31 00:00:07.857521791 +0000 UTC Remote: 2023-10-31 00:00:07.718245821 +0000 UTC m=+13.137458358 (delta=139.27597ms)
	I1031 00:00:07.861264  241323 fix.go:190] guest clock delta is within tolerance: 139.27597ms
	I1031 00:00:07.861272  241323 start.go:83] releasing machines lock for "pause-511532", held for 7.182484962s
	I1031 00:00:07.861298  241323 main.go:141] libmachine: (pause-511532) Calling .DriverName
	I1031 00:00:07.861627  241323 main.go:141] libmachine: (pause-511532) Calling .GetIP
	I1031 00:00:07.872189  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:07.872684  241323 main.go:141] libmachine: (pause-511532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:3f:54", ip: ""} in network mk-pause-511532: {Iface:virbr3 ExpiryTime:2023-10-31 00:59:05 +0000 UTC Type:0 Mac:52:54:00:7c:3f:54 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:pause-511532 Clientid:01:52:54:00:7c:3f:54}
	I1031 00:00:07.872713  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined IP address 192.168.61.111 and MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:07.873192  241323 main.go:141] libmachine: (pause-511532) Calling .DriverName
	I1031 00:00:07.879311  241323 main.go:141] libmachine: (pause-511532) Calling .DriverName
	I1031 00:00:07.879532  241323 main.go:141] libmachine: (pause-511532) Calling .DriverName
	I1031 00:00:07.879626  241323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:00:07.879663  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHHostname
	I1031 00:00:07.880083  241323 ssh_runner.go:195] Run: cat /version.json
	I1031 00:00:07.880105  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHHostname
	I1031 00:00:07.883288  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:07.883873  241323 main.go:141] libmachine: (pause-511532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:3f:54", ip: ""} in network mk-pause-511532: {Iface:virbr3 ExpiryTime:2023-10-31 00:59:05 +0000 UTC Type:0 Mac:52:54:00:7c:3f:54 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:pause-511532 Clientid:01:52:54:00:7c:3f:54}
	I1031 00:00:07.883897  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined IP address 192.168.61.111 and MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:07.884139  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHPort
	I1031 00:00:07.884346  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHKeyPath
	I1031 00:00:07.884570  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHUsername
	I1031 00:00:07.885725  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:07.886338  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHPort
	I1031 00:00:07.886403  241323 main.go:141] libmachine: (pause-511532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:3f:54", ip: ""} in network mk-pause-511532: {Iface:virbr3 ExpiryTime:2023-10-31 00:59:05 +0000 UTC Type:0 Mac:52:54:00:7c:3f:54 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:pause-511532 Clientid:01:52:54:00:7c:3f:54}
	I1031 00:00:07.886434  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined IP address 192.168.61.111 and MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:07.889192  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHKeyPath
	I1031 00:00:07.889190  241323 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/pause-511532/id_rsa Username:docker}
	I1031 00:00:07.889406  241323 main.go:141] libmachine: (pause-511532) Calling .GetSSHUsername
	I1031 00:00:07.889581  241323 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/pause-511532/id_rsa Username:docker}
	I1031 00:00:08.227999  241323 ssh_runner.go:195] Run: systemctl --version
	I1031 00:00:08.247940  241323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:00:08.462027  241323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:00:08.469613  241323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:00:08.469707  241323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:00:08.479521  241323 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1031 00:00:08.479553  241323 start.go:472] detecting cgroup driver to use...
	I1031 00:00:08.479644  241323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:00:08.497349  241323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:00:08.519437  241323 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:00:08.519506  241323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:00:08.541893  241323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:00:08.566377  241323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:00:08.819891  241323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:00:09.068665  241323 docker.go:214] disabling docker service ...
	I1031 00:00:09.068752  241323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:00:09.099424  241323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:00:09.119616  241323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:00:09.409703  241323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:00:09.651222  241323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 00:00:09.680531  241323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:00:09.737988  241323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1031 00:00:09.738073  241323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:00:09.760504  241323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:00:09.760585  241323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:00:09.803348  241323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:00:09.829961  241323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:00:09.846541  241323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 00:00:09.864498  241323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:00:09.879258  241323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 00:00:09.896519  241323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:00:10.111589  241323 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1031 00:00:11.584466  241323 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.472832064s)
	I1031 00:00:11.584518  241323 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:00:11.584586  241323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:00:11.619386  241323 start.go:540] Will wait 60s for crictl version
	I1031 00:00:11.619470  241323 ssh_runner.go:195] Run: which crictl
	I1031 00:00:11.641389  241323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:00:11.993435  241323 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:00:11.993537  241323 ssh_runner.go:195] Run: crio --version
	I1031 00:00:12.097162  241323 ssh_runner.go:195] Run: crio --version
	I1031 00:00:12.190149  241323 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1031 00:00:12.191750  241323 main.go:141] libmachine: (pause-511532) Calling .GetIP
	I1031 00:00:12.195436  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:12.195891  241323 main.go:141] libmachine: (pause-511532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:3f:54", ip: ""} in network mk-pause-511532: {Iface:virbr3 ExpiryTime:2023-10-31 00:59:05 +0000 UTC Type:0 Mac:52:54:00:7c:3f:54 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:pause-511532 Clientid:01:52:54:00:7c:3f:54}
	I1031 00:00:12.195927  241323 main.go:141] libmachine: (pause-511532) DBG | domain pause-511532 has defined IP address 192.168.61.111 and MAC address 52:54:00:7c:3f:54 in network mk-pause-511532
	I1031 00:00:12.196254  241323 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1031 00:00:12.208493  241323 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:00:12.208613  241323 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:00:12.294437  241323 crio.go:496] all images are preloaded for cri-o runtime.
	I1031 00:00:12.294523  241323 crio.go:415] Images already preloaded, skipping extraction
	I1031 00:00:12.294595  241323 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:00:12.373967  241323 crio.go:496] all images are preloaded for cri-o runtime.
	I1031 00:00:12.373993  241323 cache_images.go:84] Images are preloaded, skipping loading
	I1031 00:00:12.374075  241323 ssh_runner.go:195] Run: crio config
	I1031 00:00:12.465347  241323 cni.go:84] Creating CNI manager for ""
	I1031 00:00:12.465375  241323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:00:12.465399  241323 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:00:12.465423  241323 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.111 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-511532 NodeName:pause-511532 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 00:00:12.465609  241323 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-511532"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 00:00:12.465723  241323 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-511532 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:pause-511532 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 00:00:12.465786  241323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 00:00:12.482358  241323 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:00:12.482446  241323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:00:12.498733  241323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1031 00:00:12.533995  241323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:00:12.565812  241323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1031 00:00:12.599489  241323 ssh_runner.go:195] Run: grep 192.168.61.111	control-plane.minikube.internal$ /etc/hosts
	I1031 00:00:12.613293  241323 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/pause-511532 for IP: 192.168.61.111
	I1031 00:00:12.613337  241323 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:00:12.613529  241323 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:00:12.613588  241323 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:00:12.613686  241323 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/pause-511532/client.key
	I1031 00:00:12.613772  241323 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/pause-511532/apiserver.key.709a4700
	I1031 00:00:12.613840  241323 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/pause-511532/proxy-client.key
	I1031 00:00:12.613990  241323 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:00:12.614029  241323 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:00:12.614044  241323 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:00:12.614087  241323 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:00:12.614121  241323 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:00:12.614151  241323 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:00:12.614202  241323 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:00:12.615035  241323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/pause-511532/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:00:12.662240  241323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/pause-511532/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:00:12.743580  241323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/pause-511532/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:00:12.791584  241323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/pause-511532/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1031 00:00:12.834800  241323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:00:12.872148  241323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:00:12.914467  241323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:00:12.955486  241323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:00:12.998888  241323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:00:13.041690  241323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:00:13.089021  241323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:00:13.142839  241323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:00:13.174171  241323 ssh_runner.go:195] Run: openssl version
	I1031 00:00:13.188285  241323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:00:13.211617  241323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:00:13.222081  241323 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:00:13.222181  241323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:00:13.234160  241323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 00:00:13.253620  241323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:00:13.276627  241323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:00:13.286763  241323 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:00:13.286841  241323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:00:13.298856  241323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1031 00:00:13.317548  241323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:00:13.339003  241323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:00:13.351556  241323 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:00:13.351627  241323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:00:13.363690  241323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:00:13.394493  241323 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:00:13.412537  241323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:00:13.441104  241323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:00:13.455886  241323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:00:13.462764  241323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:00:13.476405  241323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:00:13.498122  241323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1031 00:00:13.518791  241323 kubeadm.go:404] StartCluster: {Name:pause-511532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
28.3 ClusterName:pause-511532 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gp
u-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:00:13.518945  241323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:00:13.519033  241323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:00:13.593222  241323 cri.go:89] found id: "1dbed496ab468bb1ed4837e57561cbb947d82ec054f03630c6009c68f77eadca"
	I1031 00:00:13.593250  241323 cri.go:89] found id: "269d4de2074d82da8fa20b06caabc445adb0e2c0e7dfd583a9d6d6bf5b7d02b5"
	I1031 00:00:13.593257  241323 cri.go:89] found id: "c1f16d3175523917e560a3be043acbc0928640caa3114c523e67e5fc9144d699"
	I1031 00:00:13.593264  241323 cri.go:89] found id: "31904c72f210d02df2382474f051850a9fa54ff58a1867f7dfe89e41a988955c"
	I1031 00:00:13.593270  241323 cri.go:89] found id: "85ef49f2b9aa27d450ad06b5c16c54a136b39e095a6ea17305984045f83271b2"
	I1031 00:00:13.593276  241323 cri.go:89] found id: "ca384a40b7cbfeaaf36df66cb9f73b8366504e495e4361545c82daf1e5dbf6ea"
	I1031 00:00:13.593283  241323 cri.go:89] found id: "3b7b9c72e060ffbd10c5a6d5f9a45e223551df217619281f14bbe0b19dc55956"
	I1031 00:00:13.593291  241323 cri.go:89] found id: ""
	I1031 00:00:13.593339  241323 ssh_runner.go:195] Run: sudo runc list -f json
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-511532 -n pause-511532
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-511532 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-511532 logs -n 25: (2.05196172s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|--------------------------|---------|----------------|---------------------|---------------------|
	| Command |                         Args                         |         Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|--------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | systemctl status kubelet --all                       |                          |         |                |                     |                     |
	|         | --full --no-pager                                    |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | systemctl cat kubelet                                |                          |         |                |                     |                     |
	|         | --no-pager                                           |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | journalctl -xeu kubelet --all                        |                          |         |                |                     |                     |
	|         | --full --no-pager                                    |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo cat                            | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo cat                            | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | systemctl status docker --all                        |                          |         |                |                     |                     |
	|         | --full --no-pager                                    |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | systemctl cat docker                                 |                          |         |                |                     |                     |
	|         | --no-pager                                           |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo cat                            | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | /etc/docker/daemon.json                              |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo docker                         | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | system info                                          |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | systemctl status cri-docker                          |                          |         |                |                     |                     |
	|         | --all --full --no-pager                              |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | systemctl cat cri-docker                             |                          |         |                |                     |                     |
	|         | --no-pager                                           |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo cat                            | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo cat                            | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | cri-dockerd --version                                |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | systemctl status containerd                          |                          |         |                |                     |                     |
	|         | --all --full --no-pager                              |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | systemctl cat containerd                             |                          |         |                |                     |                     |
	|         | --no-pager                                           |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo cat                            | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo cat                            | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | /etc/containerd/config.toml                          |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | containerd config dump                               |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | systemctl status crio --all                          |                          |         |                |                     |                     |
	|         | --full --no-pager                                    |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo find                           | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                          |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo crio                           | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | config                                               |                          |         |                |                     |                     |
	| delete  | -p cilium-740627                                     | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC | 31 Oct 23 00:00 UTC |
	| start   | -p force-systemd-env-781077                          | force-systemd-env-781077 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | --memory=2048                                        |                          |         |                |                     |                     |
	|         | --alsologtostderr                                    |                          |         |                |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                          |         |                |                     |                     |
	|         | --container-runtime=crio                             |                          |         |                |                     |                     |
	|---------|------------------------------------------------------|--------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 00:00:48
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 00:00:48.171759  244037 out.go:296] Setting OutFile to fd 1 ...
	I1031 00:00:48.171913  244037 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:00:48.171919  244037 out.go:309] Setting ErrFile to fd 2...
	I1031 00:00:48.171924  244037 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:00:48.172085  244037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1031 00:00:48.172773  244037 out.go:303] Setting JSON to false
	I1031 00:00:48.174108  244037 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27800,"bootTime":1698682648,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 00:00:48.174201  244037 start.go:138] virtualization: kvm guest
	I1031 00:00:48.177032  244037 out.go:177] * [force-systemd-env-781077] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 00:00:48.178565  244037 out.go:177]   - MINIKUBE_LOCATION=17527
	I1031 00:00:48.178608  244037 notify.go:220] Checking for updates...
	I1031 00:00:48.180086  244037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 00:00:48.181663  244037 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:00:48.183174  244037 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1031 00:00:48.184660  244037 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 00:00:48.186394  244037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1031 00:00:48.188312  244037 config.go:182] Loaded profile config "force-systemd-flag-768768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:00:48.188577  244037 config.go:182] Loaded profile config "pause-511532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:00:48.188706  244037 config.go:182] Loaded profile config "stopped-upgrade-237143": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1031 00:00:48.188850  244037 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 00:00:48.229729  244037 out.go:177] * Using the kvm2 driver based on user configuration
	I1031 00:00:48.231084  244037 start.go:298] selected driver: kvm2
	I1031 00:00:48.231098  244037 start.go:902] validating driver "kvm2" against <nil>
	I1031 00:00:48.231113  244037 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 00:00:48.231947  244037 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:00:48.232106  244037 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17527-208817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 00:00:48.250838  244037 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 00:00:48.250904  244037 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1031 00:00:48.251193  244037 start_flags.go:916] Wait components to verify : map[apiserver:true system_pods:true]
	I1031 00:00:48.251272  244037 cni.go:84] Creating CNI manager for ""
	I1031 00:00:48.251287  244037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:00:48.251300  244037 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1031 00:00:48.251311  244037 start_flags.go:323] config:
	{Name:force-systemd-env-781077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:force-systemd-env-781077 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:00:48.251506  244037 iso.go:125] acquiring lock: {Name:mk17c26869b21ec4c3726ac5b4b2fb393d92c043 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:00:48.253506  244037 out.go:177] * Starting control plane node force-systemd-env-781077 in cluster force-systemd-env-781077
	I1031 00:00:45.287280  241323 pod_ready.go:102] pod "coredns-5dd5756b68-zrwts" in "kube-system" namespace has status "Ready":"False"
	I1031 00:00:47.287636  241323 pod_ready.go:102] pod "coredns-5dd5756b68-zrwts" in "kube-system" namespace has status "Ready":"False"
	I1031 00:00:48.288619  241323 pod_ready.go:92] pod "coredns-5dd5756b68-zrwts" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:48.288654  241323 pod_ready.go:81] duration metric: took 7.573102151s waiting for pod "coredns-5dd5756b68-zrwts" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:48.288668  241323 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:48.296630  241323 pod_ready.go:92] pod "etcd-pause-511532" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:48.296663  241323 pod_ready.go:81] duration metric: took 7.986966ms waiting for pod "etcd-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:48.296676  241323 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:48.090333  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | domain force-systemd-flag-768768 has defined MAC address 52:54:00:a5:87:a3 in network mk-force-systemd-flag-768768
	I1031 00:00:48.090855  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | unable to find current IP address of domain force-systemd-flag-768768 in network mk-force-systemd-flag-768768
	I1031 00:00:48.091355  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | I1031 00:00:48.090819  241896 retry.go:31] will retry after 4.617170039s: waiting for machine to come up
	I1031 00:00:52.710164  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | domain force-systemd-flag-768768 has defined MAC address 52:54:00:a5:87:a3 in network mk-force-systemd-flag-768768
	I1031 00:00:52.710723  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | unable to find current IP address of domain force-systemd-flag-768768 in network mk-force-systemd-flag-768768
	I1031 00:00:52.710752  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | I1031 00:00:52.710676  241896 retry.go:31] will retry after 5.078680813s: waiting for machine to come up
	I1031 00:00:48.254907  244037 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:00:48.254981  244037 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1031 00:00:48.254998  244037 cache.go:56] Caching tarball of preloaded images
	I1031 00:00:48.255116  244037 preload.go:174] Found /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1031 00:00:48.255158  244037 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1031 00:00:48.255283  244037 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/force-systemd-env-781077/config.json ...
	I1031 00:00:48.255331  244037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/force-systemd-env-781077/config.json: {Name:mk360ff71c072eeaf375fba748a73ea01ea6388d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:00:48.255532  244037 start.go:365] acquiring machines lock for force-systemd-env-781077: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 00:00:50.336296  241323 pod_ready.go:102] pod "kube-apiserver-pause-511532" in "kube-system" namespace has status "Ready":"False"
	I1031 00:00:51.833790  241323 pod_ready.go:92] pod "kube-apiserver-pause-511532" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:51.833815  241323 pod_ready.go:81] duration metric: took 3.537130782s waiting for pod "kube-apiserver-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:51.833825  241323 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:53.356768  241323 pod_ready.go:92] pod "kube-controller-manager-pause-511532" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:53.356797  241323 pod_ready.go:81] duration metric: took 1.522965203s waiting for pod "kube-controller-manager-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:53.356810  241323 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4gxmp" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:53.365647  241323 pod_ready.go:92] pod "kube-proxy-4gxmp" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:53.365672  241323 pod_ready.go:81] duration metric: took 8.85477ms waiting for pod "kube-proxy-4gxmp" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:53.365681  241323 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:53.484253  241323 pod_ready.go:92] pod "kube-scheduler-pause-511532" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:53.484283  241323 pod_ready.go:81] duration metric: took 118.593688ms waiting for pod "kube-scheduler-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:53.484294  241323 pod_ready.go:38] duration metric: took 12.777839975s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:00:53.484318  241323 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:00:53.498059  241323 ops.go:34] apiserver oom_adj: -16
	I1031 00:00:53.498085  241323 kubeadm.go:640] restartCluster took 39.809089005s
	I1031 00:00:53.498096  241323 kubeadm.go:406] StartCluster complete in 39.979325402s
	I1031 00:00:53.498122  241323 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:00:53.498205  241323 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:00:53.498948  241323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:00:53.499189  241323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:00:53.499335  241323 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:00:53.501238  241323 out.go:177] * Enabled addons: 
	I1031 00:00:53.499535  241323 config.go:182] Loaded profile config "pause-511532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:00:53.499846  241323 kapi.go:59] client config for pause-511532: &rest.Config{Host:"https://192.168.61.111:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/pause-511532/client.crt", KeyFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/pause-511532/client.key", CAFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[
]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 00:00:53.502486  241323 addons.go:502] enable addons completed in 3.165218ms: enabled=[]
	I1031 00:00:53.505152  241323 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-511532" context rescaled to 1 replicas
	I1031 00:00:53.505188  241323 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:00:53.506665  241323 out.go:177] * Verifying Kubernetes components...
	I1031 00:00:53.508024  241323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:00:53.631050  241323 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1031 00:00:53.631067  241323 node_ready.go:35] waiting up to 6m0s for node "pause-511532" to be "Ready" ...
	I1031 00:00:53.682336  241323 node_ready.go:49] node "pause-511532" has status "Ready":"True"
	I1031 00:00:53.682358  241323 node_ready.go:38] duration metric: took 51.266139ms waiting for node "pause-511532" to be "Ready" ...
	I1031 00:00:53.682368  241323 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:00:53.885474  241323 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zrwts" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:54.282890  241323 pod_ready.go:92] pod "coredns-5dd5756b68-zrwts" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:54.282926  241323 pod_ready.go:81] duration metric: took 397.425687ms waiting for pod "coredns-5dd5756b68-zrwts" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:54.282941  241323 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:54.683833  241323 pod_ready.go:92] pod "etcd-pause-511532" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:54.683866  241323 pod_ready.go:81] duration metric: took 400.915163ms waiting for pod "etcd-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:54.683881  241323 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:55.083104  241323 pod_ready.go:92] pod "kube-apiserver-pause-511532" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:55.083129  241323 pod_ready.go:81] duration metric: took 399.240999ms waiting for pod "kube-apiserver-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:55.083143  241323 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:55.532447  241323 pod_ready.go:92] pod "kube-controller-manager-pause-511532" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:55.532491  241323 pod_ready.go:81] duration metric: took 449.326123ms waiting for pod "kube-controller-manager-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:55.532507  241323 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4gxmp" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:55.882266  241323 pod_ready.go:92] pod "kube-proxy-4gxmp" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:55.882306  241323 pod_ready.go:81] duration metric: took 349.788555ms waiting for pod "kube-proxy-4gxmp" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:55.882322  241323 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:56.282454  241323 pod_ready.go:92] pod "kube-scheduler-pause-511532" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:56.282489  241323 pod_ready.go:81] duration metric: took 400.154756ms waiting for pod "kube-scheduler-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:56.282502  241323 pod_ready.go:38] duration metric: took 2.600123556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:00:56.282543  241323 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:00:56.282600  241323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:00:56.299145  241323 api_server.go:72] duration metric: took 2.793904036s to wait for apiserver process to appear ...
	I1031 00:00:56.299171  241323 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:00:56.299192  241323 api_server.go:253] Checking apiserver healthz at https://192.168.61.111:8443/healthz ...
	I1031 00:00:56.307924  241323 api_server.go:279] https://192.168.61.111:8443/healthz returned 200:
	ok
	I1031 00:00:56.309919  241323 api_server.go:141] control plane version: v1.28.3
	I1031 00:00:56.309938  241323 api_server.go:131] duration metric: took 10.759917ms to wait for apiserver health ...
	I1031 00:00:56.309956  241323 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:00:56.485450  241323 system_pods.go:59] 6 kube-system pods found
	I1031 00:00:56.485486  241323 system_pods.go:61] "coredns-5dd5756b68-zrwts" [6dfc7640-e0b2-4e6e-bee4-6d3503590092] Running
	I1031 00:00:56.485494  241323 system_pods.go:61] "etcd-pause-511532" [485b224f-e887-44de-b3ec-83fe2c8420d7] Running
	I1031 00:00:56.485502  241323 system_pods.go:61] "kube-apiserver-pause-511532" [02c3a984-af0d-4c48-8b52-6a621539ec5b] Running
	I1031 00:00:56.485507  241323 system_pods.go:61] "kube-controller-manager-pause-511532" [3e93ef15-ac7e-4a87-a65d-c70ab4d04007] Running
	I1031 00:00:56.485519  241323 system_pods.go:61] "kube-proxy-4gxmp" [8e217fd5-df8f-442d-a8e2-f60321b379b3] Running
	I1031 00:00:56.485527  241323 system_pods.go:61] "kube-scheduler-pause-511532" [1482fc4f-80dc-4a54-967a-0da3429afc55] Running
	I1031 00:00:56.485542  241323 system_pods.go:74] duration metric: took 175.573077ms to wait for pod list to return data ...
	I1031 00:00:56.485561  241323 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:00:56.685433  241323 default_sa.go:45] found service account: "default"
	I1031 00:00:56.685480  241323 default_sa.go:55] duration metric: took 199.910936ms for default service account to be created ...
	I1031 00:00:56.685491  241323 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:00:56.885942  241323 system_pods.go:86] 6 kube-system pods found
	I1031 00:00:56.885973  241323 system_pods.go:89] "coredns-5dd5756b68-zrwts" [6dfc7640-e0b2-4e6e-bee4-6d3503590092] Running
	I1031 00:00:56.885981  241323 system_pods.go:89] "etcd-pause-511532" [485b224f-e887-44de-b3ec-83fe2c8420d7] Running
	I1031 00:00:56.885987  241323 system_pods.go:89] "kube-apiserver-pause-511532" [02c3a984-af0d-4c48-8b52-6a621539ec5b] Running
	I1031 00:00:56.885993  241323 system_pods.go:89] "kube-controller-manager-pause-511532" [3e93ef15-ac7e-4a87-a65d-c70ab4d04007] Running
	I1031 00:00:56.886004  241323 system_pods.go:89] "kube-proxy-4gxmp" [8e217fd5-df8f-442d-a8e2-f60321b379b3] Running
	I1031 00:00:56.886010  241323 system_pods.go:89] "kube-scheduler-pause-511532" [1482fc4f-80dc-4a54-967a-0da3429afc55] Running
	I1031 00:00:56.886019  241323 system_pods.go:126] duration metric: took 200.52086ms to wait for k8s-apps to be running ...
	I1031 00:00:56.886028  241323 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:00:56.886080  241323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:00:56.902609  241323 system_svc.go:56] duration metric: took 16.570785ms WaitForService to wait for kubelet.
	I1031 00:00:56.902643  241323 kubeadm.go:581] duration metric: took 3.397409967s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:00:56.902667  241323 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:00:57.084901  241323 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:00:57.085033  241323 node_conditions.go:123] node cpu capacity is 2
	I1031 00:00:57.085058  241323 node_conditions.go:105] duration metric: took 182.384315ms to run NodePressure ...
	I1031 00:00:57.085100  241323 start.go:228] waiting for startup goroutines ...
	I1031 00:00:57.085111  241323 start.go:233] waiting for cluster config update ...
	I1031 00:00:57.085121  241323 start.go:242] writing updated cluster config ...
	I1031 00:00:57.085553  241323 ssh_runner.go:195] Run: rm -f paused
	I1031 00:00:57.139405  241323 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1031 00:00:57.141924  241323 out.go:177] * Done! kubectl is now configured to use "pause-511532" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-10-30 23:59:02 UTC, ends at Tue 2023-10-31 00:00:58 UTC. --
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.192929665Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698710458192913455,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=cfb150bd-b9bd-4d5e-989c-1d0dd11f22e8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.193358050Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2c7c914c-0dc2-42e2-aa2b-a170999ec8c1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.193448726Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2c7c914c-0dc2-42e2-aa2b-a170999ec8c1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.193969720Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4fdf4edb1b322426ea741e8fa81f880a9aa26d23afb65149dc720d2a10f2e28e,PodSandboxId:4bd39095007b5748b31bf594cd63d5a9d85598bad72b9241803a0f3704b18a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698710439614295872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zrwts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfc7640-e0b2-4e6e-bee4-6d3503590092,},Annotations:map[string]string{io.kubernetes.container.hash: d9db6e68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c53374fd13a57d9260e1d58829e1b7a69848d385cc036b9ee3037fd77360772,PodSandboxId:1e55730dc1b07abd11c532d8a7871e3e45b062ea0e7382139226a40f814dc399,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698710433543986564,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-511532,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: c1b50f99d94ff0f18dea14fb1f15af59,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e1476f7c8ced05109e60047e6746fda560b19c281a1d0159091a36d64387d8,PodSandboxId:0b41c31d9555244d6144274977d6a720acb7d26345df3cd817bd83671f1405e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698710433510389068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a72a0e28a26794ed92dee98e38f
5f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ddf6180,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc40a0a2d0c66705c8e9475b6bbae234bccc8cc2f6b836f4f9b3e53c89fd0b7c,PodSandboxId:0f0a2dd4d90c6db9277fdf5e7cfbac484f3807cb2f5cea6f77b107979a237cee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698710429682048410,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93a27823f0dd03ad34311bb2da17cb0,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: cdc14025,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30c9e1047e07fd90e20fcd0d4c36552146c2a5dab7d057304525601e93866c9f,PodSandboxId:f90a8cd4afbad734b2ef60065a94d6e6bf304fbea0d22e9650c6c79bd4318e22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698710429673249372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 572544fb70b096f3120dd642
211b1701,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d5d2ac870654e412cd4d5e35ae60713dcdd82c7029f6e23c6f38de39aa2286,PodSandboxId:c8fba2f1bcc53630ba5a90d8a8a03e8d9db0fd9c9306697600372b7d966ed8e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698710425866989883,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4gxmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e217fd5-df8f-442d-a8e2-f60321b379b3,},Annotations:map[string]s
tring{io.kubernetes.container.hash: af0758db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c51cabffea83ffd4468990c967e6b1ed49c333afe9e7cd0db27b3d46844d182b,PodSandboxId:4bd39095007b5748b31bf594cd63d5a9d85598bad72b9241803a0f3704b18a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1698710413764295462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zrwts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfc7640-e0b2-4e6e-bee4-6d3503590092,},Annotations:map[string]string{io.kubernetes.container
.hash: d9db6e68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cc1b4a1a945521db83518a5a4893c4c70a332ff1bc39d52a2e7314ab907008,PodSandboxId:0b41c31d9555244d6144274977d6a720acb7d26345df3cd817bd83671f1405e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1698710413461581444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a72a0e28a26794ed92dee98e38f5f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ddf6180,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dbed496ab468bb1ed4837e57561cbb947d82ec054f03630c6009c68f77eadca,PodSandboxId:cd00f79645242e8c20fc7e7588f812579d106a3c6284869b36b40af75a6798f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698710409804532798,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-511532,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: c1b50f99d94ff0f18dea14fb1f15af59,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:269d4de2074d82da8fa20b06caabc445adb0e2c0e7dfd583a9d6d6bf5b7d02b5,PodSandboxId:200bbf319b41297226b2ab030f43b2d073df0837f59e83bba2434363da015292,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698710409311306105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93a27823f0dd03ad34311bb2da17
cb0,},Annotations:map[string]string{io.kubernetes.container.hash: cdc14025,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f16d3175523917e560a3be043acbc0928640caa3114c523e67e5fc9144d699,PodSandboxId:1c4f3a5357370fa18bb832cb7095a60ef3c310bde50ca1e44fd5466698121ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698710409204282434,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 572544fb70b096f3120dd642211b1701,},Annotations:map[string
]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31904c72f210d02df2382474f051850a9fa54ff58a1867f7dfe89e41a988955c,PodSandboxId:41348674a41b171962cd40f9d2b740a063e65fe4a6e9b5caab0d479cbc7dd678,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698710394003069536,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4gxmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e217fd5-df8f-442d-a8e2-f60321b379b3,},Annotations:map[string]string{io.kubernetes.container.hash: af0758db,io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca384a40b7cbfeaaf36df66cb9f73b8366504e495e4361545c82daf1e5dbf6ea,PodSandboxId:046d87fa3648114d5271154aa87120f33383dcec151de77c5e1ae4a756eb1e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698710393524989842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-blsnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c35ba2-5c5e-4908-8567-dff97d6abe21,},Annotations:map[string]string{io.kubernetes.container.hash: 811245ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2c7c914c-0dc2-42e2-aa2b-a170999ec8c1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.256408106Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7e60093b-c9ff-449e-a5f7-45d4d796d0a6 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.256473245Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7e60093b-c9ff-449e-a5f7-45d4d796d0a6 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.257738988Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=adc05f10-09f0-41ec-b457-3214edd39d53 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.258317221Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698710458258300376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=adc05f10-09f0-41ec-b457-3214edd39d53 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.259460993Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ccdbb367-0dc9-4d94-b892-d3285500addc name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.259558949Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ccdbb367-0dc9-4d94-b892-d3285500addc name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.260033154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4fdf4edb1b322426ea741e8fa81f880a9aa26d23afb65149dc720d2a10f2e28e,PodSandboxId:4bd39095007b5748b31bf594cd63d5a9d85598bad72b9241803a0f3704b18a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698710439614295872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zrwts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfc7640-e0b2-4e6e-bee4-6d3503590092,},Annotations:map[string]string{io.kubernetes.container.hash: d9db6e68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c53374fd13a57d9260e1d58829e1b7a69848d385cc036b9ee3037fd77360772,PodSandboxId:1e55730dc1b07abd11c532d8a7871e3e45b062ea0e7382139226a40f814dc399,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698710433543986564,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-511532,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: c1b50f99d94ff0f18dea14fb1f15af59,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e1476f7c8ced05109e60047e6746fda560b19c281a1d0159091a36d64387d8,PodSandboxId:0b41c31d9555244d6144274977d6a720acb7d26345df3cd817bd83671f1405e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698710433510389068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a72a0e28a26794ed92dee98e38f
5f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ddf6180,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc40a0a2d0c66705c8e9475b6bbae234bccc8cc2f6b836f4f9b3e53c89fd0b7c,PodSandboxId:0f0a2dd4d90c6db9277fdf5e7cfbac484f3807cb2f5cea6f77b107979a237cee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698710429682048410,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93a27823f0dd03ad34311bb2da17cb0,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: cdc14025,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30c9e1047e07fd90e20fcd0d4c36552146c2a5dab7d057304525601e93866c9f,PodSandboxId:f90a8cd4afbad734b2ef60065a94d6e6bf304fbea0d22e9650c6c79bd4318e22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698710429673249372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 572544fb70b096f3120dd642
211b1701,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d5d2ac870654e412cd4d5e35ae60713dcdd82c7029f6e23c6f38de39aa2286,PodSandboxId:c8fba2f1bcc53630ba5a90d8a8a03e8d9db0fd9c9306697600372b7d966ed8e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698710425866989883,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4gxmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e217fd5-df8f-442d-a8e2-f60321b379b3,},Annotations:map[string]s
tring{io.kubernetes.container.hash: af0758db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c51cabffea83ffd4468990c967e6b1ed49c333afe9e7cd0db27b3d46844d182b,PodSandboxId:4bd39095007b5748b31bf594cd63d5a9d85598bad72b9241803a0f3704b18a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1698710413764295462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zrwts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfc7640-e0b2-4e6e-bee4-6d3503590092,},Annotations:map[string]string{io.kubernetes.container
.hash: d9db6e68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cc1b4a1a945521db83518a5a4893c4c70a332ff1bc39d52a2e7314ab907008,PodSandboxId:0b41c31d9555244d6144274977d6a720acb7d26345df3cd817bd83671f1405e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1698710413461581444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a72a0e28a26794ed92dee98e38f5f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ddf6180,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dbed496ab468bb1ed4837e57561cbb947d82ec054f03630c6009c68f77eadca,PodSandboxId:cd00f79645242e8c20fc7e7588f812579d106a3c6284869b36b40af75a6798f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698710409804532798,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-511532,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: c1b50f99d94ff0f18dea14fb1f15af59,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:269d4de2074d82da8fa20b06caabc445adb0e2c0e7dfd583a9d6d6bf5b7d02b5,PodSandboxId:200bbf319b41297226b2ab030f43b2d073df0837f59e83bba2434363da015292,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698710409311306105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93a27823f0dd03ad34311bb2da17
cb0,},Annotations:map[string]string{io.kubernetes.container.hash: cdc14025,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f16d3175523917e560a3be043acbc0928640caa3114c523e67e5fc9144d699,PodSandboxId:1c4f3a5357370fa18bb832cb7095a60ef3c310bde50ca1e44fd5466698121ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698710409204282434,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 572544fb70b096f3120dd642211b1701,},Annotations:map[string
]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31904c72f210d02df2382474f051850a9fa54ff58a1867f7dfe89e41a988955c,PodSandboxId:41348674a41b171962cd40f9d2b740a063e65fe4a6e9b5caab0d479cbc7dd678,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698710394003069536,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4gxmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e217fd5-df8f-442d-a8e2-f60321b379b3,},Annotations:map[string]string{io.kubernetes.container.hash: af0758db,io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca384a40b7cbfeaaf36df66cb9f73b8366504e495e4361545c82daf1e5dbf6ea,PodSandboxId:046d87fa3648114d5271154aa87120f33383dcec151de77c5e1ae4a756eb1e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698710393524989842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-blsnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c35ba2-5c5e-4908-8567-dff97d6abe21,},Annotations:map[string]string{io.kubernetes.container.hash: 811245ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ccdbb367-0dc9-4d94-b892-d3285500addc name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.308241400Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8c6ee2d2-ba1e-4d0b-baff-cc87881d963b name=/runtime.v1.RuntimeService/Version
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.308300779Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8c6ee2d2-ba1e-4d0b-baff-cc87881d963b name=/runtime.v1.RuntimeService/Version
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.310222315Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1d0d12f9-7fa3-4e25-8aa5-74566f7cf494 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.310687859Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698710458310662839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=1d0d12f9-7fa3-4e25-8aa5-74566f7cf494 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.311463756Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fd778206-44ed-4347-98af-f6f92177a5ae name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.311571615Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fd778206-44ed-4347-98af-f6f92177a5ae name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.312125003Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4fdf4edb1b322426ea741e8fa81f880a9aa26d23afb65149dc720d2a10f2e28e,PodSandboxId:4bd39095007b5748b31bf594cd63d5a9d85598bad72b9241803a0f3704b18a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698710439614295872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zrwts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfc7640-e0b2-4e6e-bee4-6d3503590092,},Annotations:map[string]string{io.kubernetes.container.hash: d9db6e68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c53374fd13a57d9260e1d58829e1b7a69848d385cc036b9ee3037fd77360772,PodSandboxId:1e55730dc1b07abd11c532d8a7871e3e45b062ea0e7382139226a40f814dc399,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698710433543986564,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-511532,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: c1b50f99d94ff0f18dea14fb1f15af59,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e1476f7c8ced05109e60047e6746fda560b19c281a1d0159091a36d64387d8,PodSandboxId:0b41c31d9555244d6144274977d6a720acb7d26345df3cd817bd83671f1405e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698710433510389068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a72a0e28a26794ed92dee98e38f
5f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ddf6180,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc40a0a2d0c66705c8e9475b6bbae234bccc8cc2f6b836f4f9b3e53c89fd0b7c,PodSandboxId:0f0a2dd4d90c6db9277fdf5e7cfbac484f3807cb2f5cea6f77b107979a237cee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698710429682048410,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93a27823f0dd03ad34311bb2da17cb0,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: cdc14025,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30c9e1047e07fd90e20fcd0d4c36552146c2a5dab7d057304525601e93866c9f,PodSandboxId:f90a8cd4afbad734b2ef60065a94d6e6bf304fbea0d22e9650c6c79bd4318e22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698710429673249372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 572544fb70b096f3120dd642
211b1701,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d5d2ac870654e412cd4d5e35ae60713dcdd82c7029f6e23c6f38de39aa2286,PodSandboxId:c8fba2f1bcc53630ba5a90d8a8a03e8d9db0fd9c9306697600372b7d966ed8e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698710425866989883,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4gxmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e217fd5-df8f-442d-a8e2-f60321b379b3,},Annotations:map[string]s
tring{io.kubernetes.container.hash: af0758db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c51cabffea83ffd4468990c967e6b1ed49c333afe9e7cd0db27b3d46844d182b,PodSandboxId:4bd39095007b5748b31bf594cd63d5a9d85598bad72b9241803a0f3704b18a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1698710413764295462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zrwts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfc7640-e0b2-4e6e-bee4-6d3503590092,},Annotations:map[string]string{io.kubernetes.container
.hash: d9db6e68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cc1b4a1a945521db83518a5a4893c4c70a332ff1bc39d52a2e7314ab907008,PodSandboxId:0b41c31d9555244d6144274977d6a720acb7d26345df3cd817bd83671f1405e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1698710413461581444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a72a0e28a26794ed92dee98e38f5f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ddf6180,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dbed496ab468bb1ed4837e57561cbb947d82ec054f03630c6009c68f77eadca,PodSandboxId:cd00f79645242e8c20fc7e7588f812579d106a3c6284869b36b40af75a6798f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698710409804532798,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-511532,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: c1b50f99d94ff0f18dea14fb1f15af59,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:269d4de2074d82da8fa20b06caabc445adb0e2c0e7dfd583a9d6d6bf5b7d02b5,PodSandboxId:200bbf319b41297226b2ab030f43b2d073df0837f59e83bba2434363da015292,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698710409311306105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93a27823f0dd03ad34311bb2da17
cb0,},Annotations:map[string]string{io.kubernetes.container.hash: cdc14025,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f16d3175523917e560a3be043acbc0928640caa3114c523e67e5fc9144d699,PodSandboxId:1c4f3a5357370fa18bb832cb7095a60ef3c310bde50ca1e44fd5466698121ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698710409204282434,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 572544fb70b096f3120dd642211b1701,},Annotations:map[string
]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31904c72f210d02df2382474f051850a9fa54ff58a1867f7dfe89e41a988955c,PodSandboxId:41348674a41b171962cd40f9d2b740a063e65fe4a6e9b5caab0d479cbc7dd678,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698710394003069536,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4gxmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e217fd5-df8f-442d-a8e2-f60321b379b3,},Annotations:map[string]string{io.kubernetes.container.hash: af0758db,io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca384a40b7cbfeaaf36df66cb9f73b8366504e495e4361545c82daf1e5dbf6ea,PodSandboxId:046d87fa3648114d5271154aa87120f33383dcec151de77c5e1ae4a756eb1e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698710393524989842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-blsnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c35ba2-5c5e-4908-8567-dff97d6abe21,},Annotations:map[string]string{io.kubernetes.container.hash: 811245ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fd778206-44ed-4347-98af-f6f92177a5ae name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.365344640Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c70c7db2-3f7c-4de2-afa0-1d37c14f0846 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.365451872Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c70c7db2-3f7c-4de2-afa0-1d37c14f0846 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.367931940Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=dc0c99f4-bb4a-4dbf-af97-2dbb13f63002 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.368355412Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698710458368341898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=dc0c99f4-bb4a-4dbf-af97-2dbb13f63002 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.369507448Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1816a685-e92c-4fab-adef-b0f29712baad name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.369605305Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1816a685-e92c-4fab-adef-b0f29712baad name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:00:58 pause-511532 crio[2596]: time="2023-10-31 00:00:58.370110961Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4fdf4edb1b322426ea741e8fa81f880a9aa26d23afb65149dc720d2a10f2e28e,PodSandboxId:4bd39095007b5748b31bf594cd63d5a9d85598bad72b9241803a0f3704b18a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698710439614295872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zrwts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfc7640-e0b2-4e6e-bee4-6d3503590092,},Annotations:map[string]string{io.kubernetes.container.hash: d9db6e68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c53374fd13a57d9260e1d58829e1b7a69848d385cc036b9ee3037fd77360772,PodSandboxId:1e55730dc1b07abd11c532d8a7871e3e45b062ea0e7382139226a40f814dc399,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698710433543986564,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-511532,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: c1b50f99d94ff0f18dea14fb1f15af59,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e1476f7c8ced05109e60047e6746fda560b19c281a1d0159091a36d64387d8,PodSandboxId:0b41c31d9555244d6144274977d6a720acb7d26345df3cd817bd83671f1405e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698710433510389068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a72a0e28a26794ed92dee98e38f
5f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ddf6180,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc40a0a2d0c66705c8e9475b6bbae234bccc8cc2f6b836f4f9b3e53c89fd0b7c,PodSandboxId:0f0a2dd4d90c6db9277fdf5e7cfbac484f3807cb2f5cea6f77b107979a237cee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698710429682048410,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93a27823f0dd03ad34311bb2da17cb0,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: cdc14025,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30c9e1047e07fd90e20fcd0d4c36552146c2a5dab7d057304525601e93866c9f,PodSandboxId:f90a8cd4afbad734b2ef60065a94d6e6bf304fbea0d22e9650c6c79bd4318e22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698710429673249372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 572544fb70b096f3120dd642
211b1701,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d5d2ac870654e412cd4d5e35ae60713dcdd82c7029f6e23c6f38de39aa2286,PodSandboxId:c8fba2f1bcc53630ba5a90d8a8a03e8d9db0fd9c9306697600372b7d966ed8e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698710425866989883,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4gxmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e217fd5-df8f-442d-a8e2-f60321b379b3,},Annotations:map[string]s
tring{io.kubernetes.container.hash: af0758db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c51cabffea83ffd4468990c967e6b1ed49c333afe9e7cd0db27b3d46844d182b,PodSandboxId:4bd39095007b5748b31bf594cd63d5a9d85598bad72b9241803a0f3704b18a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1698710413764295462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zrwts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfc7640-e0b2-4e6e-bee4-6d3503590092,},Annotations:map[string]string{io.kubernetes.container
.hash: d9db6e68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cc1b4a1a945521db83518a5a4893c4c70a332ff1bc39d52a2e7314ab907008,PodSandboxId:0b41c31d9555244d6144274977d6a720acb7d26345df3cd817bd83671f1405e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1698710413461581444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a72a0e28a26794ed92dee98e38f5f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ddf6180,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dbed496ab468bb1ed4837e57561cbb947d82ec054f03630c6009c68f77eadca,PodSandboxId:cd00f79645242e8c20fc7e7588f812579d106a3c6284869b36b40af75a6798f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698710409804532798,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-511532,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: c1b50f99d94ff0f18dea14fb1f15af59,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:269d4de2074d82da8fa20b06caabc445adb0e2c0e7dfd583a9d6d6bf5b7d02b5,PodSandboxId:200bbf319b41297226b2ab030f43b2d073df0837f59e83bba2434363da015292,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698710409311306105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93a27823f0dd03ad34311bb2da17
cb0,},Annotations:map[string]string{io.kubernetes.container.hash: cdc14025,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f16d3175523917e560a3be043acbc0928640caa3114c523e67e5fc9144d699,PodSandboxId:1c4f3a5357370fa18bb832cb7095a60ef3c310bde50ca1e44fd5466698121ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698710409204282434,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 572544fb70b096f3120dd642211b1701,},Annotations:map[string
]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31904c72f210d02df2382474f051850a9fa54ff58a1867f7dfe89e41a988955c,PodSandboxId:41348674a41b171962cd40f9d2b740a063e65fe4a6e9b5caab0d479cbc7dd678,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698710394003069536,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4gxmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e217fd5-df8f-442d-a8e2-f60321b379b3,},Annotations:map[string]string{io.kubernetes.container.hash: af0758db,io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca384a40b7cbfeaaf36df66cb9f73b8366504e495e4361545c82daf1e5dbf6ea,PodSandboxId:046d87fa3648114d5271154aa87120f33383dcec151de77c5e1ae4a756eb1e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698710393524989842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-blsnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c35ba2-5c5e-4908-8567-dff97d6abe21,},Annotations:map[string]string{io.kubernetes.container.hash: 811245ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1816a685-e92c-4fab-adef-b0f29712baad name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4fdf4edb1b322       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 seconds ago       Running             coredns                   2                   4bd39095007b5       coredns-5dd5756b68-zrwts
	6c53374fd13a5       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   24 seconds ago       Running             kube-scheduler            2                   1e55730dc1b07       kube-scheduler-pause-511532
	96e1476f7c8ce       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   24 seconds ago       Running             etcd                      2                   0b41c31d95552       etcd-pause-511532
	bc40a0a2d0c66       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   28 seconds ago       Running             kube-apiserver            2                   0f0a2dd4d90c6       kube-apiserver-pause-511532
	30c9e1047e07f       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   28 seconds ago       Running             kube-controller-manager   2                   f90a8cd4afbad       kube-controller-manager-pause-511532
	f8d5d2ac87065       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   32 seconds ago       Running             kube-proxy                1                   c8fba2f1bcc53       kube-proxy-4gxmp
	c51cabffea83f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   44 seconds ago       Exited              coredns                   1                   4bd39095007b5       coredns-5dd5756b68-zrwts
	96cc1b4a1a945       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   44 seconds ago       Exited              etcd                      1                   0b41c31d95552       etcd-pause-511532
	1dbed496ab468       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   48 seconds ago       Exited              kube-scheduler            1                   cd00f79645242       kube-scheduler-pause-511532
	269d4de2074d8       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   49 seconds ago       Exited              kube-apiserver            1                   200bbf319b412       kube-apiserver-pause-511532
	c1f16d3175523       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   49 seconds ago       Exited              kube-controller-manager   1                   1c4f3a5357370       kube-controller-manager-pause-511532
	31904c72f210d       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   About a minute ago   Exited              kube-proxy                0                   41348674a41b1       kube-proxy-4gxmp
	ca384a40b7cbf       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   About a minute ago   Exited              coredns                   0                   046d87fa36481       coredns-5dd5756b68-blsnn
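	The Exited entries in the table above are earlier attempts (attempt 0/1) that were replaced by the attempt-2 instances when the control plane restarted. If those containers have not been garbage-collected, their logs can still be read on the node by container ID — a hypothetical reproduction step, with the truncated ID taken from the table:
	# fetch logs of the exited attempt-1 coredns container by its truncated ID
	minikube -p pause-511532 ssh -- sudo crictl logs c51cabffea83f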
	
	* 
	* ==> coredns [4fdf4edb1b322426ea741e8fa81f880a9aa26d23afb65149dc720d2a10f2e28e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35470 - 40532 "HINFO IN 4732493365500966030.5342294527962623447. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00891906s
	
	* 
	* ==> coredns [c51cabffea83ffd4468990c967e6b1ed49c333afe9e7cd0db27b3d46844d182b] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34364 - 56213 "HINFO IN 5845250037702485935.8808534765531650410. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012978982s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
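	This coredns instance shut down on SIGTERM after losing its apiserver connection (10.96.0.1:443 refused), consistent with the control-plane restart above. If the pod still exists, the same output should be retrievable via kubectl's previous-container log flag — a hypothetical command, assuming the kubeconfig context matches the profile name:
	kubectl --context pause-511532 -n kube-system logs coredns-5dd5756b68-zrwts --previous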
	
	* 
	* ==> coredns [ca384a40b7cbfeaaf36df66cb9f73b8366504e495e4361545c82daf1e5dbf6ea] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-511532
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-511532
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4
	                    minikube.k8s.io/name=pause-511532
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_30T23_59_40_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Oct 2023 23:59:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-511532
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 Oct 2023 00:00:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 00:00:39 +0000   Mon, 30 Oct 2023 23:59:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 00:00:39 +0000   Mon, 30 Oct 2023 23:59:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 00:00:39 +0000   Mon, 30 Oct 2023 23:59:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 00:00:39 +0000   Mon, 30 Oct 2023 23:59:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.111
	  Hostname:    pause-511532
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ea31e160b7c4d8c8de508a12fface0c
	  System UUID:                3ea31e16-0b7c-4d8c-8de5-08a12fface0c
	  Boot ID:                    3a72bca7-182f-4b62-b705-ba4acf68d404
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-zrwts                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     67s
	  kube-system                 etcd-pause-511532                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         78s
	  kube-system                 kube-apiserver-pause-511532             250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-pause-511532    200m (10%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-4gxmp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-scheduler-pause-511532             100m (5%)     0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 64s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 89s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  88s (x8 over 89s)  kubelet          Node pause-511532 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s (x8 over 89s)  kubelet          Node pause-511532 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s (x7 over 89s)  kubelet          Node pause-511532 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     78s                kubelet          Node pause-511532 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  78s                kubelet          Node pause-511532 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s                kubelet          Node pause-511532 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                78s                kubelet          Node pause-511532 status is now: NodeReady
	  Normal  Starting                 78s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           68s                node-controller  Node pause-511532 event: Registered Node pause-511532 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  25s (x8 over 26s)  kubelet          Node pause-511532 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 26s)  kubelet          Node pause-511532 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 26s)  kubelet          Node pause-511532 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7s                 node-controller  Node pause-511532 event: Registered Node pause-511532 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067399] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.635892] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Oct30 23:59] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.173503] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.228875] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.742354] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.130505] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.201490] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.167083] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.278895] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[ +10.722209] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[ +10.405774] systemd-fstab-generator[1267]: Ignoring "noauto" for root device
	[Oct31 00:00] kauditd_printk_skb: 24 callbacks suppressed
	[  +1.207129] systemd-fstab-generator[2351]: Ignoring "noauto" for root device
	[  +0.254962] systemd-fstab-generator[2362]: Ignoring "noauto" for root device
	[  +0.290110] systemd-fstab-generator[2388]: Ignoring "noauto" for root device
	[  +0.297301] systemd-fstab-generator[2445]: Ignoring "noauto" for root device
	[  +0.471559] systemd-fstab-generator[2489]: Ignoring "noauto" for root device
	[ +21.951069] systemd-fstab-generator[3360]: Ignoring "noauto" for root device
	[  +7.495702] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.673178] hrtimer: interrupt took 2972624 ns
	
	* 
	* ==> etcd [96cc1b4a1a945521db83518a5a4893c4c70a332ff1bc39d52a2e7314ab907008] <==
	* {"level":"info","ts":"2023-10-31T00:00:14.648645Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.111:2380"}
	{"level":"info","ts":"2023-10-31T00:00:15.974133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-31T00:00:15.974357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-31T00:00:15.974418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 received MsgPreVoteResp from a698de7cf8a0ada7 at term 2"}
	{"level":"info","ts":"2023-10-31T00:00:15.974464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 became candidate at term 3"}
	{"level":"info","ts":"2023-10-31T00:00:15.974488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 received MsgVoteResp from a698de7cf8a0ada7 at term 3"}
	{"level":"info","ts":"2023-10-31T00:00:15.974515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 became leader at term 3"}
	{"level":"info","ts":"2023-10-31T00:00:15.974541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a698de7cf8a0ada7 elected leader a698de7cf8a0ada7 at term 3"}
	{"level":"info","ts":"2023-10-31T00:00:15.982815Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T00:00:15.983903Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a698de7cf8a0ada7","local-member-attributes":"{Name:pause-511532 ClientURLs:[https://192.168.61.111:2379]}","request-path":"/0/members/a698de7cf8a0ada7/attributes","cluster-id":"e340cdbee7b26912","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-31T00:00:15.98431Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T00:00:15.984651Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.111:2379"}
	{"level":"info","ts":"2023-10-31T00:00:15.985675Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-31T00:00:15.985732Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-31T00:00:15.985986Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-31T00:00:29.028541Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-31T00:00:29.028627Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-511532","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.111:2380"],"advertise-client-urls":["https://192.168.61.111:2379"]}
	{"level":"warn","ts":"2023-10-31T00:00:29.028846Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-31T00:00:29.029186Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-31T00:00:29.030973Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.111:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-31T00:00:29.031022Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.111:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-31T00:00:29.031309Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"a698de7cf8a0ada7","current-leader-member-id":"a698de7cf8a0ada7"}
	{"level":"info","ts":"2023-10-31T00:00:29.036255Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.61.111:2380"}
	{"level":"info","ts":"2023-10-31T00:00:29.036495Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.61.111:2380"}
	{"level":"info","ts":"2023-10-31T00:00:29.036567Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-511532","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.111:2380"],"advertise-client-urls":["https://192.168.61.111:2379"]}
	
	* 
	* ==> etcd [96e1476f7c8ced05109e60047e6746fda560b19c281a1d0159091a36d64387d8] <==
	* {"level":"info","ts":"2023-10-31T00:00:35.380386Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-31T00:00:35.380397Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-31T00:00:35.380689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 switched to configuration voters=(12004589435084647847)"}
	{"level":"info","ts":"2023-10-31T00:00:35.38089Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e340cdbee7b26912","local-member-id":"a698de7cf8a0ada7","added-peer-id":"a698de7cf8a0ada7","added-peer-peer-urls":["https://192.168.61.111:2380"]}
	{"level":"info","ts":"2023-10-31T00:00:35.381055Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e340cdbee7b26912","local-member-id":"a698de7cf8a0ada7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T00:00:35.381125Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T00:00:35.384454Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-31T00:00:35.385015Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"a698de7cf8a0ada7","initial-advertise-peer-urls":["https://192.168.61.111:2380"],"listen-peer-urls":["https://192.168.61.111:2380"],"advertise-client-urls":["https://192.168.61.111:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.111:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-31T00:00:35.385087Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-31T00:00:35.385404Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.111:2380"}
	{"level":"info","ts":"2023-10-31T00:00:35.385464Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.111:2380"}
	{"level":"info","ts":"2023-10-31T00:00:36.749092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-10-31T00:00:36.749448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-10-31T00:00:36.749564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 received MsgPreVoteResp from a698de7cf8a0ada7 at term 3"}
	{"level":"info","ts":"2023-10-31T00:00:36.749658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 became candidate at term 4"}
	{"level":"info","ts":"2023-10-31T00:00:36.74971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 received MsgVoteResp from a698de7cf8a0ada7 at term 4"}
	{"level":"info","ts":"2023-10-31T00:00:36.749852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 became leader at term 4"}
	{"level":"info","ts":"2023-10-31T00:00:36.749925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a698de7cf8a0ada7 elected leader a698de7cf8a0ada7 at term 4"}
	{"level":"info","ts":"2023-10-31T00:00:36.757347Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T00:00:36.757357Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a698de7cf8a0ada7","local-member-attributes":"{Name:pause-511532 ClientURLs:[https://192.168.61.111:2379]}","request-path":"/0/members/a698de7cf8a0ada7/attributes","cluster-id":"e340cdbee7b26912","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-31T00:00:36.757718Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T00:00:36.75897Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.111:2379"}
	{"level":"info","ts":"2023-10-31T00:00:36.759415Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-31T00:00:36.759661Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-31T00:00:36.759712Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:00:58 up 2 min,  0 users,  load average: 1.47, 0.61, 0.22
	Linux pause-511532 5.10.57 #1 SMP Mon Oct 30 21:42:24 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [269d4de2074d82da8fa20b06caabc445adb0e2c0e7dfd583a9d6d6bf5b7d02b5] <==
	* 
	* 
	* ==> kube-apiserver [bc40a0a2d0c66705c8e9475b6bbae234bccc8cc2f6b836f4f9b3e53c89fd0b7c] <==
	* I1031 00:00:38.723606       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1031 00:00:38.723722       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1031 00:00:38.723743       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1031 00:00:38.924208       1 shared_informer.go:318] Caches are synced for configmaps
	I1031 00:00:38.924346       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1031 00:00:38.926184       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1031 00:00:38.938197       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1031 00:00:38.943053       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1031 00:00:38.953991       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1031 00:00:38.954529       1 aggregator.go:166] initial CRD sync complete...
	I1031 00:00:38.954619       1 autoregister_controller.go:141] Starting autoregister controller
	I1031 00:00:38.954669       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1031 00:00:38.954712       1 cache.go:39] Caches are synced for autoregister controller
	I1031 00:00:38.962943       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1031 00:00:38.963000       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1031 00:00:38.979019       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E1031 00:00:39.019577       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1031 00:00:39.665231       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1031 00:00:40.491714       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1031 00:00:40.511876       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1031 00:00:40.581647       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1031 00:00:40.662048       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1031 00:00:40.676026       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1031 00:00:51.664426       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1031 00:00:51.714657       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [30c9e1047e07fd90e20fcd0d4c36552146c2a5dab7d057304525601e93866c9f] <==
	* I1031 00:00:51.364929       1 event.go:307] "Event occurred" object="pause-511532" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-511532 event: Registered Node pause-511532 in Controller"
	I1031 00:00:51.388006       1 shared_informer.go:318] Caches are synced for HPA
	I1031 00:00:51.393542       1 shared_informer.go:318] Caches are synced for GC
	I1031 00:00:51.393588       1 shared_informer.go:318] Caches are synced for PVC protection
	I1031 00:00:51.394869       1 shared_informer.go:318] Caches are synced for deployment
	I1031 00:00:51.402812       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1031 00:00:51.408428       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1031 00:00:51.410100       1 shared_informer.go:318] Caches are synced for stateful set
	I1031 00:00:51.413939       1 shared_informer.go:318] Caches are synced for ephemeral
	I1031 00:00:51.418124       1 shared_informer.go:318] Caches are synced for attach detach
	I1031 00:00:51.420010       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1031 00:00:51.426834       1 shared_informer.go:318] Caches are synced for endpoint
	I1031 00:00:51.431997       1 shared_informer.go:318] Caches are synced for resource quota
	I1031 00:00:51.432060       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1031 00:00:51.437013       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1031 00:00:51.438414       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1031 00:00:51.439497       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1031 00:00:51.440725       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1031 00:00:51.446478       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1031 00:00:51.516542       1 shared_informer.go:318] Caches are synced for resource quota
	I1031 00:00:51.626046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="217.484204ms"
	I1031 00:00:51.626648       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="285.457µs"
	I1031 00:00:51.851402       1 shared_informer.go:318] Caches are synced for garbage collector
	I1031 00:00:51.910445       1 shared_informer.go:318] Caches are synced for garbage collector
	I1031 00:00:51.910547       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [c1f16d3175523917e560a3be043acbc0928640caa3114c523e67e5fc9144d699] <==
	* 
	* 
	* ==> kube-proxy [31904c72f210d02df2382474f051850a9fa54ff58a1867f7dfe89e41a988955c] <==
	* I1030 23:59:54.255850       1 server_others.go:69] "Using iptables proxy"
	I1030 23:59:54.273475       1 node.go:141] Successfully retrieved node IP: 192.168.61.111
	I1030 23:59:54.349114       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1030 23:59:54.349254       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1030 23:59:54.371328       1 server_others.go:152] "Using iptables Proxier"
	I1030 23:59:54.372314       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1030 23:59:54.376847       1 server.go:846] "Version info" version="v1.28.3"
	I1030 23:59:54.376982       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 23:59:54.381348       1 config.go:188] "Starting service config controller"
	I1030 23:59:54.382251       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1030 23:59:54.382390       1 config.go:315] "Starting node config controller"
	I1030 23:59:54.382547       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1030 23:59:54.384447       1 config.go:97] "Starting endpoint slice config controller"
	I1030 23:59:54.401257       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1030 23:59:54.483252       1 shared_informer.go:318] Caches are synced for node config
	I1030 23:59:54.483330       1 shared_informer.go:318] Caches are synced for service config
	I1030 23:59:54.501718       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [f8d5d2ac870654e412cd4d5e35ae60713dcdd82c7029f6e23c6f38de39aa2286] <==
	* I1031 00:00:26.029290       1 server_others.go:69] "Using iptables proxy"
	E1031 00:00:26.032074       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-511532": dial tcp 192.168.61.111:8443: connect: connection refused
	E1031 00:00:27.192310       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-511532": dial tcp 192.168.61.111:8443: connect: connection refused
	E1031 00:00:29.295171       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-511532": dial tcp 192.168.61.111:8443: connect: connection refused
	I1031 00:00:38.985403       1 node.go:141] Successfully retrieved node IP: 192.168.61.111
	I1031 00:00:39.111935       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1031 00:00:39.111999       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1031 00:00:39.126019       1 server_others.go:152] "Using iptables Proxier"
	I1031 00:00:39.126141       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1031 00:00:39.126867       1 server.go:846] "Version info" version="v1.28.3"
	I1031 00:00:39.126927       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 00:00:39.130668       1 config.go:315] "Starting node config controller"
	I1031 00:00:39.130723       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1031 00:00:39.131005       1 config.go:188] "Starting service config controller"
	I1031 00:00:39.131072       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1031 00:00:39.131167       1 config.go:97] "Starting endpoint slice config controller"
	I1031 00:00:39.131230       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1031 00:00:39.231538       1 shared_informer.go:318] Caches are synced for node config
	I1031 00:00:39.231637       1 shared_informer.go:318] Caches are synced for service config
	I1031 00:00:39.231524       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [1dbed496ab468bb1ed4837e57561cbb947d82ec054f03630c6009c68f77eadca] <==
	* 
	* 
	* ==> kube-scheduler [6c53374fd13a57d9260e1d58829e1b7a69848d385cc036b9ee3037fd77360772] <==
	* I1031 00:00:35.911526       1 serving.go:348] Generated self-signed cert in-memory
	I1031 00:00:39.049835       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1031 00:00:39.049975       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 00:00:39.093667       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1031 00:00:39.094104       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1031 00:00:39.094174       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1031 00:00:39.094211       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1031 00:00:39.106690       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1031 00:00:39.109959       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1031 00:00:39.107092       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1031 00:00:39.110327       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1031 00:00:39.195272       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1031 00:00:39.210258       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1031 00:00:39.210883       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-30 23:59:02 UTC, ends at Tue 2023-10-31 00:00:59 UTC. --
	Oct 31 00:00:33 pause-511532 kubelet[3366]: E1031 00:00:33.412070    3366 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-511532&limit=500&resourceVersion=0": dial tcp 192.168.61.111:8443: connect: connection refused
	Oct 31 00:00:33 pause-511532 kubelet[3366]: I1031 00:00:33.456349    3366 scope.go:117] "RemoveContainer" containerID="96cc1b4a1a945521db83518a5a4893c4c70a332ff1bc39d52a2e7314ab907008"
	Oct 31 00:00:33 pause-511532 kubelet[3366]: I1031 00:00:33.458872    3366 scope.go:117] "RemoveContainer" containerID="269d4de2074d82da8fa20b06caabc445adb0e2c0e7dfd583a9d6d6bf5b7d02b5"
	Oct 31 00:00:33 pause-511532 kubelet[3366]: I1031 00:00:33.460419    3366 scope.go:117] "RemoveContainer" containerID="c1f16d3175523917e560a3be043acbc0928640caa3114c523e67e5fc9144d699"
	Oct 31 00:00:33 pause-511532 kubelet[3366]: I1031 00:00:33.463186    3366 scope.go:117] "RemoveContainer" containerID="1dbed496ab468bb1ed4837e57561cbb947d82ec054f03630c6009c68f77eadca"
	Oct 31 00:00:33 pause-511532 kubelet[3366]: W1031 00:00:33.575580    3366 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.111:8443: connect: connection refused
	Oct 31 00:00:33 pause-511532 kubelet[3366]: E1031 00:00:33.575690    3366 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.111:8443: connect: connection refused
	Oct 31 00:00:33 pause-511532 kubelet[3366]: E1031 00:00:33.722087    3366 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-511532?timeout=10s\": dial tcp 192.168.61.111:8443: connect: connection refused" interval="1.6s"
	Oct 31 00:00:33 pause-511532 kubelet[3366]: I1031 00:00:33.825032    3366 kubelet_node_status.go:70] "Attempting to register node" node="pause-511532"
	Oct 31 00:00:33 pause-511532 kubelet[3366]: E1031 00:00:33.825902    3366 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.111:8443: connect: connection refused" node="pause-511532"
	Oct 31 00:00:35 pause-511532 kubelet[3366]: I1031 00:00:35.428596    3366 kubelet_node_status.go:70] "Attempting to register node" node="pause-511532"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.010683    3366 kubelet_node_status.go:108] "Node was previously registered" node="pause-511532"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.010874    3366 kubelet_node_status.go:73] "Successfully registered node" node="pause-511532"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.013068    3366 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.014463    3366 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.274115    3366 apiserver.go:52] "Watching apiserver"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.278572    3366 topology_manager.go:215] "Topology Admit Handler" podUID="f7c35ba2-5c5e-4908-8567-dff97d6abe21" podNamespace="kube-system" podName="coredns-5dd5756b68-blsnn"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.278900    3366 topology_manager.go:215] "Topology Admit Handler" podUID="6dfc7640-e0b2-4e6e-bee4-6d3503590092" podNamespace="kube-system" podName="coredns-5dd5756b68-zrwts"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.279044    3366 topology_manager.go:215] "Topology Admit Handler" podUID="8e217fd5-df8f-442d-a8e2-f60321b379b3" podNamespace="kube-system" podName="kube-proxy-4gxmp"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.305692    3366 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.389537    3366 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e217fd5-df8f-442d-a8e2-f60321b379b3-xtables-lock\") pod \"kube-proxy-4gxmp\" (UID: \"8e217fd5-df8f-442d-a8e2-f60321b379b3\") " pod="kube-system/kube-proxy-4gxmp"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.389625    3366 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e217fd5-df8f-442d-a8e2-f60321b379b3-lib-modules\") pod \"kube-proxy-4gxmp\" (UID: \"8e217fd5-df8f-442d-a8e2-f60321b379b3\") " pod="kube-system/kube-proxy-4gxmp"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.584094    3366 scope.go:117] "RemoveContainer" containerID="c51cabffea83ffd4468990c967e6b1ed49c333afe9e7cd0db27b3d46844d182b"
	Oct 31 00:00:40 pause-511532 kubelet[3366]: I1031 00:00:40.450989    3366 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f7c35ba2-5c5e-4908-8567-dff97d6abe21" path="/var/lib/kubelet/pods/f7c35ba2-5c5e-4908-8567-dff97d6abe21/volumes"
	Oct 31 00:00:48 pause-511532 kubelet[3366]: I1031 00:00:48.102133    3366 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
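
(Editor's note: the sections above, Audit, describe nodes, dmesg, per-container logs and the kubelet journal, are collected in one shot by "minikube logs". A rough manual sketch for re-collecting the same information against this profile, assuming pause-511532 is still up and using a placeholder <container-id> taken from the section headers above, would be:

	kubectl --context pause-511532 describe node pause-511532
	out/minikube-linux-amd64 -p pause-511532 ssh -- sudo journalctl -u kubelet --no-pager
	out/minikube-linux-amd64 -p pause-511532 ssh -- sudo crictl ps -a
	out/minikube-linux-amd64 -p pause-511532 ssh -- sudo crictl logs <container-id>
	out/minikube-linux-amd64 -p pause-511532 ssh -- sudo dmesg

The container IDs printed in the "==> etcd [...]" and similar headers can be passed to crictl logs directly.)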
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-511532 -n pause-511532
helpers_test.go:261: (dbg) Run:  kubectl --context pause-511532 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
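
(Editor's note: the surrounding helpers_test lines probe cluster state with Go-template formatted status calls and a field-selector pod query before and after each log dump. A minimal manual equivalent, assuming the same profile name and kubeconfig context, might be:

	out/minikube-linux-amd64 status -p pause-511532 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
	kubectl --context pause-511532 get pods -A --field-selector=status.phase!=Running -o name
)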
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-511532 -n pause-511532
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-511532 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-511532 logs -n 25: (2.296897411s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|--------------------------|---------|----------------|---------------------|---------------------|
	| Command |                         Args                         |         Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|--------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | systemctl status kubelet --all                       |                          |         |                |                     |                     |
	|         | --full --no-pager                                    |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | systemctl cat kubelet                                |                          |         |                |                     |                     |
	|         | --no-pager                                           |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | journalctl -xeu kubelet --all                        |                          |         |                |                     |                     |
	|         | --full --no-pager                                    |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo cat                            | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo cat                            | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | systemctl status docker --all                        |                          |         |                |                     |                     |
	|         | --full --no-pager                                    |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | systemctl cat docker                                 |                          |         |                |                     |                     |
	|         | --no-pager                                           |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo cat                            | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | /etc/docker/daemon.json                              |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo docker                         | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | system info                                          |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | systemctl status cri-docker                          |                          |         |                |                     |                     |
	|         | --all --full --no-pager                              |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | systemctl cat cri-docker                             |                          |         |                |                     |                     |
	|         | --no-pager                                           |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo cat                            | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo cat                            | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | cri-dockerd --version                                |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | systemctl status containerd                          |                          |         |                |                     |                     |
	|         | --all --full --no-pager                              |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | systemctl cat containerd                             |                          |         |                |                     |                     |
	|         | --no-pager                                           |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo cat                            | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo cat                            | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | /etc/containerd/config.toml                          |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | containerd config dump                               |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | systemctl status crio --all                          |                          |         |                |                     |                     |
	|         | --full --no-pager                                    |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo                                | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo find                           | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                          |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                          |         |                |                     |                     |
	| ssh     | -p cilium-740627 sudo crio                           | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | config                                               |                          |         |                |                     |                     |
	| delete  | -p cilium-740627                                     | cilium-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC | 31 Oct 23 00:00 UTC |
	| start   | -p force-systemd-env-781077                          | force-systemd-env-781077 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:00 UTC |                     |
	|         | --memory=2048                                        |                          |         |                |                     |                     |
	|         | --alsologtostderr                                    |                          |         |                |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                          |         |                |                     |                     |
	|         | --container-runtime=crio                             |                          |         |                |                     |                     |
	|---------|------------------------------------------------------|--------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 00:00:48
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 00:00:48.171759  244037 out.go:296] Setting OutFile to fd 1 ...
	I1031 00:00:48.171913  244037 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:00:48.171919  244037 out.go:309] Setting ErrFile to fd 2...
	I1031 00:00:48.171924  244037 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:00:48.172085  244037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1031 00:00:48.172773  244037 out.go:303] Setting JSON to false
	I1031 00:00:48.174108  244037 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27800,"bootTime":1698682648,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 00:00:48.174201  244037 start.go:138] virtualization: kvm guest
	I1031 00:00:48.177032  244037 out.go:177] * [force-systemd-env-781077] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 00:00:48.178565  244037 out.go:177]   - MINIKUBE_LOCATION=17527
	I1031 00:00:48.178608  244037 notify.go:220] Checking for updates...
	I1031 00:00:48.180086  244037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 00:00:48.181663  244037 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:00:48.183174  244037 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1031 00:00:48.184660  244037 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 00:00:48.186394  244037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1031 00:00:48.188312  244037 config.go:182] Loaded profile config "force-systemd-flag-768768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:00:48.188577  244037 config.go:182] Loaded profile config "pause-511532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:00:48.188706  244037 config.go:182] Loaded profile config "stopped-upgrade-237143": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1031 00:00:48.188850  244037 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 00:00:48.229729  244037 out.go:177] * Using the kvm2 driver based on user configuration
	I1031 00:00:48.231084  244037 start.go:298] selected driver: kvm2
	I1031 00:00:48.231098  244037 start.go:902] validating driver "kvm2" against <nil>
	I1031 00:00:48.231113  244037 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 00:00:48.231947  244037 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:00:48.232106  244037 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17527-208817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 00:00:48.250838  244037 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 00:00:48.250904  244037 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1031 00:00:48.251193  244037 start_flags.go:916] Wait components to verify : map[apiserver:true system_pods:true]
	I1031 00:00:48.251272  244037 cni.go:84] Creating CNI manager for ""
	I1031 00:00:48.251287  244037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:00:48.251300  244037 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1031 00:00:48.251311  244037 start_flags.go:323] config:
	{Name:force-systemd-env-781077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:force-systemd-env-781077 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:00:48.251506  244037 iso.go:125] acquiring lock: {Name:mk17c26869b21ec4c3726ac5b4b2fb393d92c043 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:00:48.253506  244037 out.go:177] * Starting control plane node force-systemd-env-781077 in cluster force-systemd-env-781077
	I1031 00:00:45.287280  241323 pod_ready.go:102] pod "coredns-5dd5756b68-zrwts" in "kube-system" namespace has status "Ready":"False"
	I1031 00:00:47.287636  241323 pod_ready.go:102] pod "coredns-5dd5756b68-zrwts" in "kube-system" namespace has status "Ready":"False"
	I1031 00:00:48.288619  241323 pod_ready.go:92] pod "coredns-5dd5756b68-zrwts" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:48.288654  241323 pod_ready.go:81] duration metric: took 7.573102151s waiting for pod "coredns-5dd5756b68-zrwts" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:48.288668  241323 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:48.296630  241323 pod_ready.go:92] pod "etcd-pause-511532" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:48.296663  241323 pod_ready.go:81] duration metric: took 7.986966ms waiting for pod "etcd-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:48.296676  241323 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:48.090333  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | domain force-systemd-flag-768768 has defined MAC address 52:54:00:a5:87:a3 in network mk-force-systemd-flag-768768
	I1031 00:00:48.090855  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | unable to find current IP address of domain force-systemd-flag-768768 in network mk-force-systemd-flag-768768
	I1031 00:00:48.091355  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | I1031 00:00:48.090819  241896 retry.go:31] will retry after 4.617170039s: waiting for machine to come up
	I1031 00:00:52.710164  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | domain force-systemd-flag-768768 has defined MAC address 52:54:00:a5:87:a3 in network mk-force-systemd-flag-768768
	I1031 00:00:52.710723  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | unable to find current IP address of domain force-systemd-flag-768768 in network mk-force-systemd-flag-768768
	I1031 00:00:52.710752  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | I1031 00:00:52.710676  241896 retry.go:31] will retry after 5.078680813s: waiting for machine to come up
	I1031 00:00:48.254907  244037 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:00:48.254981  244037 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1031 00:00:48.254998  244037 cache.go:56] Caching tarball of preloaded images
	I1031 00:00:48.255116  244037 preload.go:174] Found /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1031 00:00:48.255158  244037 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1031 00:00:48.255283  244037 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/force-systemd-env-781077/config.json ...
	I1031 00:00:48.255331  244037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/force-systemd-env-781077/config.json: {Name:mk360ff71c072eeaf375fba748a73ea01ea6388d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:00:48.255532  244037 start.go:365] acquiring machines lock for force-systemd-env-781077: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 00:00:50.336296  241323 pod_ready.go:102] pod "kube-apiserver-pause-511532" in "kube-system" namespace has status "Ready":"False"
	I1031 00:00:51.833790  241323 pod_ready.go:92] pod "kube-apiserver-pause-511532" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:51.833815  241323 pod_ready.go:81] duration metric: took 3.537130782s waiting for pod "kube-apiserver-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:51.833825  241323 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:53.356768  241323 pod_ready.go:92] pod "kube-controller-manager-pause-511532" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:53.356797  241323 pod_ready.go:81] duration metric: took 1.522965203s waiting for pod "kube-controller-manager-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:53.356810  241323 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4gxmp" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:53.365647  241323 pod_ready.go:92] pod "kube-proxy-4gxmp" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:53.365672  241323 pod_ready.go:81] duration metric: took 8.85477ms waiting for pod "kube-proxy-4gxmp" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:53.365681  241323 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:53.484253  241323 pod_ready.go:92] pod "kube-scheduler-pause-511532" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:53.484283  241323 pod_ready.go:81] duration metric: took 118.593688ms waiting for pod "kube-scheduler-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:53.484294  241323 pod_ready.go:38] duration metric: took 12.777839975s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:00:53.484318  241323 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:00:53.498059  241323 ops.go:34] apiserver oom_adj: -16
	I1031 00:00:53.498085  241323 kubeadm.go:640] restartCluster took 39.809089005s
	I1031 00:00:53.498096  241323 kubeadm.go:406] StartCluster complete in 39.979325402s
	I1031 00:00:53.498122  241323 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:00:53.498205  241323 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:00:53.498948  241323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:00:53.499189  241323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:00:53.499335  241323 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:00:53.501238  241323 out.go:177] * Enabled addons: 
	I1031 00:00:53.499535  241323 config.go:182] Loaded profile config "pause-511532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:00:53.499846  241323 kapi.go:59] client config for pause-511532: &rest.Config{Host:"https://192.168.61.111:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/pause-511532/client.crt", KeyFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/profiles/pause-511532/client.key", CAFile:"/home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 00:00:53.502486  241323 addons.go:502] enable addons completed in 3.165218ms: enabled=[]
	I1031 00:00:53.505152  241323 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-511532" context rescaled to 1 replicas
	I1031 00:00:53.505188  241323 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:00:53.506665  241323 out.go:177] * Verifying Kubernetes components...
	I1031 00:00:53.508024  241323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:00:53.631050  241323 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1031 00:00:53.631067  241323 node_ready.go:35] waiting up to 6m0s for node "pause-511532" to be "Ready" ...
	I1031 00:00:53.682336  241323 node_ready.go:49] node "pause-511532" has status "Ready":"True"
	I1031 00:00:53.682358  241323 node_ready.go:38] duration metric: took 51.266139ms waiting for node "pause-511532" to be "Ready" ...
	I1031 00:00:53.682368  241323 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:00:53.885474  241323 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zrwts" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:54.282890  241323 pod_ready.go:92] pod "coredns-5dd5756b68-zrwts" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:54.282926  241323 pod_ready.go:81] duration metric: took 397.425687ms waiting for pod "coredns-5dd5756b68-zrwts" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:54.282941  241323 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:54.683833  241323 pod_ready.go:92] pod "etcd-pause-511532" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:54.683866  241323 pod_ready.go:81] duration metric: took 400.915163ms waiting for pod "etcd-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:54.683881  241323 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:55.083104  241323 pod_ready.go:92] pod "kube-apiserver-pause-511532" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:55.083129  241323 pod_ready.go:81] duration metric: took 399.240999ms waiting for pod "kube-apiserver-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:55.083143  241323 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:55.532447  241323 pod_ready.go:92] pod "kube-controller-manager-pause-511532" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:55.532491  241323 pod_ready.go:81] duration metric: took 449.326123ms waiting for pod "kube-controller-manager-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:55.532507  241323 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4gxmp" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:55.882266  241323 pod_ready.go:92] pod "kube-proxy-4gxmp" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:55.882306  241323 pod_ready.go:81] duration metric: took 349.788555ms waiting for pod "kube-proxy-4gxmp" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:55.882322  241323 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:56.282454  241323 pod_ready.go:92] pod "kube-scheduler-pause-511532" in "kube-system" namespace has status "Ready":"True"
	I1031 00:00:56.282489  241323 pod_ready.go:81] duration metric: took 400.154756ms waiting for pod "kube-scheduler-pause-511532" in "kube-system" namespace to be "Ready" ...
	I1031 00:00:56.282502  241323 pod_ready.go:38] duration metric: took 2.600123556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:00:56.282543  241323 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:00:56.282600  241323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:00:56.299145  241323 api_server.go:72] duration metric: took 2.793904036s to wait for apiserver process to appear ...
	I1031 00:00:56.299171  241323 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:00:56.299192  241323 api_server.go:253] Checking apiserver healthz at https://192.168.61.111:8443/healthz ...
	I1031 00:00:56.307924  241323 api_server.go:279] https://192.168.61.111:8443/healthz returned 200:
	ok
	I1031 00:00:56.309919  241323 api_server.go:141] control plane version: v1.28.3
	I1031 00:00:56.309938  241323 api_server.go:131] duration metric: took 10.759917ms to wait for apiserver health ...
	I1031 00:00:56.309956  241323 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:00:56.485450  241323 system_pods.go:59] 6 kube-system pods found
	I1031 00:00:56.485486  241323 system_pods.go:61] "coredns-5dd5756b68-zrwts" [6dfc7640-e0b2-4e6e-bee4-6d3503590092] Running
	I1031 00:00:56.485494  241323 system_pods.go:61] "etcd-pause-511532" [485b224f-e887-44de-b3ec-83fe2c8420d7] Running
	I1031 00:00:56.485502  241323 system_pods.go:61] "kube-apiserver-pause-511532" [02c3a984-af0d-4c48-8b52-6a621539ec5b] Running
	I1031 00:00:56.485507  241323 system_pods.go:61] "kube-controller-manager-pause-511532" [3e93ef15-ac7e-4a87-a65d-c70ab4d04007] Running
	I1031 00:00:56.485519  241323 system_pods.go:61] "kube-proxy-4gxmp" [8e217fd5-df8f-442d-a8e2-f60321b379b3] Running
	I1031 00:00:56.485527  241323 system_pods.go:61] "kube-scheduler-pause-511532" [1482fc4f-80dc-4a54-967a-0da3429afc55] Running
	I1031 00:00:56.485542  241323 system_pods.go:74] duration metric: took 175.573077ms to wait for pod list to return data ...
	I1031 00:00:56.485561  241323 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:00:56.685433  241323 default_sa.go:45] found service account: "default"
	I1031 00:00:56.685480  241323 default_sa.go:55] duration metric: took 199.910936ms for default service account to be created ...
	I1031 00:00:56.685491  241323 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:00:56.885942  241323 system_pods.go:86] 6 kube-system pods found
	I1031 00:00:56.885973  241323 system_pods.go:89] "coredns-5dd5756b68-zrwts" [6dfc7640-e0b2-4e6e-bee4-6d3503590092] Running
	I1031 00:00:56.885981  241323 system_pods.go:89] "etcd-pause-511532" [485b224f-e887-44de-b3ec-83fe2c8420d7] Running
	I1031 00:00:56.885987  241323 system_pods.go:89] "kube-apiserver-pause-511532" [02c3a984-af0d-4c48-8b52-6a621539ec5b] Running
	I1031 00:00:56.885993  241323 system_pods.go:89] "kube-controller-manager-pause-511532" [3e93ef15-ac7e-4a87-a65d-c70ab4d04007] Running
	I1031 00:00:56.886004  241323 system_pods.go:89] "kube-proxy-4gxmp" [8e217fd5-df8f-442d-a8e2-f60321b379b3] Running
	I1031 00:00:56.886010  241323 system_pods.go:89] "kube-scheduler-pause-511532" [1482fc4f-80dc-4a54-967a-0da3429afc55] Running
	I1031 00:00:56.886019  241323 system_pods.go:126] duration metric: took 200.52086ms to wait for k8s-apps to be running ...
	I1031 00:00:56.886028  241323 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:00:56.886080  241323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:00:56.902609  241323 system_svc.go:56] duration metric: took 16.570785ms WaitForService to wait for kubelet.
	I1031 00:00:56.902643  241323 kubeadm.go:581] duration metric: took 3.397409967s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:00:56.902667  241323 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:00:57.084901  241323 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:00:57.085033  241323 node_conditions.go:123] node cpu capacity is 2
	I1031 00:00:57.085058  241323 node_conditions.go:105] duration metric: took 182.384315ms to run NodePressure ...
	I1031 00:00:57.085100  241323 start.go:228] waiting for startup goroutines ...
	I1031 00:00:57.085111  241323 start.go:233] waiting for cluster config update ...
	I1031 00:00:57.085121  241323 start.go:242] writing updated cluster config ...
	I1031 00:00:57.085553  241323 ssh_runner.go:195] Run: rm -f paused
	I1031 00:00:57.139405  241323 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1031 00:00:57.141924  241323 out.go:177] * Done! kubectl is now configured to use "pause-511532" cluster and "default" namespace by default
	I1031 00:00:57.791643  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | domain force-systemd-flag-768768 has defined MAC address 52:54:00:a5:87:a3 in network mk-force-systemd-flag-768768
	I1031 00:00:57.792265  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | domain force-systemd-flag-768768 has current primary IP address 192.168.39.53 and MAC address 52:54:00:a5:87:a3 in network mk-force-systemd-flag-768768
	I1031 00:00:57.792287  241703 main.go:141] libmachine: (force-systemd-flag-768768) Found IP for machine: 192.168.39.53
	I1031 00:00:57.792303  241703 main.go:141] libmachine: (force-systemd-flag-768768) Reserving static IP address...
	I1031 00:00:57.792637  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | unable to find host DHCP lease matching {name: "force-systemd-flag-768768", mac: "52:54:00:a5:87:a3", ip: "192.168.39.53"} in network mk-force-systemd-flag-768768
	I1031 00:00:57.890403  241703 main.go:141] libmachine: (force-systemd-flag-768768) Reserved static IP address: 192.168.39.53
	I1031 00:00:57.890428  241703 main.go:141] libmachine: (force-systemd-flag-768768) Waiting for SSH to be available...
	I1031 00:00:57.890450  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | Getting to WaitForSSH function...
	I1031 00:00:57.895518  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | domain force-systemd-flag-768768 has defined MAC address 52:54:00:a5:87:a3 in network mk-force-systemd-flag-768768
	I1031 00:00:57.895908  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:a5:87:a3", ip: ""} in network mk-force-systemd-flag-768768
	I1031 00:00:57.895934  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | unable to find defined IP address of network mk-force-systemd-flag-768768 interface with MAC address 52:54:00:a5:87:a3
	I1031 00:00:57.896058  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | Using SSH client type: external
	I1031 00:00:57.896079  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/force-systemd-flag-768768/id_rsa (-rw-------)
	I1031 00:00:57.896116  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/force-systemd-flag-768768/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:00:57.896132  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | About to run SSH command:
	I1031 00:00:57.896145  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | exit 0
	I1031 00:00:57.900380  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | SSH cmd err, output: exit status 255: 
	I1031 00:00:57.900412  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1031 00:00:57.900425  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | command : exit 0
	I1031 00:00:57.900438  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | err     : exit status 255
	I1031 00:00:57.900451  241703 main.go:141] libmachine: (force-systemd-flag-768768) DBG | output  : 
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-10-30 23:59:02 UTC, ends at Tue 2023-10-31 00:01:00 UTC. --
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.718737857Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=63d58146-7611-4fe1-a398-61e5e74c4625 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.719985622Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7032e140-17e2-411d-8c27-b622c3a636ec name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.720314422Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698710460720302550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=7032e140-17e2-411d-8c27-b622c3a636ec name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.721207628Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=13a2c906-b00f-44d8-9067-1e1dd14553ba name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.721255498Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=13a2c906-b00f-44d8-9067-1e1dd14553ba name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.721581804Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4fdf4edb1b322426ea741e8fa81f880a9aa26d23afb65149dc720d2a10f2e28e,PodSandboxId:4bd39095007b5748b31bf594cd63d5a9d85598bad72b9241803a0f3704b18a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698710439614295872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zrwts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfc7640-e0b2-4e6e-bee4-6d3503590092,},Annotations:map[string]string{io.kubernetes.container.hash: d9db6e68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c53374fd13a57d9260e1d58829e1b7a69848d385cc036b9ee3037fd77360772,PodSandboxId:1e55730dc1b07abd11c532d8a7871e3e45b062ea0e7382139226a40f814dc399,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698710433543986564,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-511532,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: c1b50f99d94ff0f18dea14fb1f15af59,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e1476f7c8ced05109e60047e6746fda560b19c281a1d0159091a36d64387d8,PodSandboxId:0b41c31d9555244d6144274977d6a720acb7d26345df3cd817bd83671f1405e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698710433510389068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a72a0e28a26794ed92dee98e38f
5f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ddf6180,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc40a0a2d0c66705c8e9475b6bbae234bccc8cc2f6b836f4f9b3e53c89fd0b7c,PodSandboxId:0f0a2dd4d90c6db9277fdf5e7cfbac484f3807cb2f5cea6f77b107979a237cee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698710429682048410,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93a27823f0dd03ad34311bb2da17cb0,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: cdc14025,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30c9e1047e07fd90e20fcd0d4c36552146c2a5dab7d057304525601e93866c9f,PodSandboxId:f90a8cd4afbad734b2ef60065a94d6e6bf304fbea0d22e9650c6c79bd4318e22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698710429673249372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 572544fb70b096f3120dd642
211b1701,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d5d2ac870654e412cd4d5e35ae60713dcdd82c7029f6e23c6f38de39aa2286,PodSandboxId:c8fba2f1bcc53630ba5a90d8a8a03e8d9db0fd9c9306697600372b7d966ed8e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698710425866989883,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4gxmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e217fd5-df8f-442d-a8e2-f60321b379b3,},Annotations:map[string]s
tring{io.kubernetes.container.hash: af0758db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c51cabffea83ffd4468990c967e6b1ed49c333afe9e7cd0db27b3d46844d182b,PodSandboxId:4bd39095007b5748b31bf594cd63d5a9d85598bad72b9241803a0f3704b18a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1698710413764295462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zrwts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfc7640-e0b2-4e6e-bee4-6d3503590092,},Annotations:map[string]string{io.kubernetes.container
.hash: d9db6e68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cc1b4a1a945521db83518a5a4893c4c70a332ff1bc39d52a2e7314ab907008,PodSandboxId:0b41c31d9555244d6144274977d6a720acb7d26345df3cd817bd83671f1405e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1698710413461581444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a72a0e28a26794ed92dee98e38f5f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ddf6180,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dbed496ab468bb1ed4837e57561cbb947d82ec054f03630c6009c68f77eadca,PodSandboxId:cd00f79645242e8c20fc7e7588f812579d106a3c6284869b36b40af75a6798f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698710409804532798,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-511532,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: c1b50f99d94ff0f18dea14fb1f15af59,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:269d4de2074d82da8fa20b06caabc445adb0e2c0e7dfd583a9d6d6bf5b7d02b5,PodSandboxId:200bbf319b41297226b2ab030f43b2d073df0837f59e83bba2434363da015292,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698710409311306105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93a27823f0dd03ad34311bb2da17
cb0,},Annotations:map[string]string{io.kubernetes.container.hash: cdc14025,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f16d3175523917e560a3be043acbc0928640caa3114c523e67e5fc9144d699,PodSandboxId:1c4f3a5357370fa18bb832cb7095a60ef3c310bde50ca1e44fd5466698121ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698710409204282434,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 572544fb70b096f3120dd642211b1701,},Annotations:map[string
]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31904c72f210d02df2382474f051850a9fa54ff58a1867f7dfe89e41a988955c,PodSandboxId:41348674a41b171962cd40f9d2b740a063e65fe4a6e9b5caab0d479cbc7dd678,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698710394003069536,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4gxmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e217fd5-df8f-442d-a8e2-f60321b379b3,},Annotations:map[string]string{io.kubernetes.container.hash: af0758db,io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca384a40b7cbfeaaf36df66cb9f73b8366504e495e4361545c82daf1e5dbf6ea,PodSandboxId:046d87fa3648114d5271154aa87120f33383dcec151de77c5e1ae4a756eb1e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698710393524989842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-blsnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c35ba2-5c5e-4908-8567-dff97d6abe21,},Annotations:map[string]string{io.kubernetes.container.hash: 811245ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=13a2c906-b00f-44d8-9067-1e1dd14553ba name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.763661283Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7aa7bfff-2d3b-4c8c-bf0d-e2228828ae22 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.763721904Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7aa7bfff-2d3b-4c8c-bf0d-e2228828ae22 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.764730654Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b88e4891-ea9e-4ba0-93cb-cccdf2a2e525 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.765140461Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698710460765128352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=b88e4891-ea9e-4ba0-93cb-cccdf2a2e525 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.765956733Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=61710c71-533d-4a6d-b20f-61e9e6ce48a1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.766088474Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=61710c71-533d-4a6d-b20f-61e9e6ce48a1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.766476239Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4fdf4edb1b322426ea741e8fa81f880a9aa26d23afb65149dc720d2a10f2e28e,PodSandboxId:4bd39095007b5748b31bf594cd63d5a9d85598bad72b9241803a0f3704b18a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698710439614295872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zrwts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfc7640-e0b2-4e6e-bee4-6d3503590092,},Annotations:map[string]string{io.kubernetes.container.hash: d9db6e68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c53374fd13a57d9260e1d58829e1b7a69848d385cc036b9ee3037fd77360772,PodSandboxId:1e55730dc1b07abd11c532d8a7871e3e45b062ea0e7382139226a40f814dc399,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698710433543986564,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-511532,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: c1b50f99d94ff0f18dea14fb1f15af59,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e1476f7c8ced05109e60047e6746fda560b19c281a1d0159091a36d64387d8,PodSandboxId:0b41c31d9555244d6144274977d6a720acb7d26345df3cd817bd83671f1405e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698710433510389068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a72a0e28a26794ed92dee98e38f
5f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ddf6180,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc40a0a2d0c66705c8e9475b6bbae234bccc8cc2f6b836f4f9b3e53c89fd0b7c,PodSandboxId:0f0a2dd4d90c6db9277fdf5e7cfbac484f3807cb2f5cea6f77b107979a237cee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698710429682048410,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93a27823f0dd03ad34311bb2da17cb0,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: cdc14025,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30c9e1047e07fd90e20fcd0d4c36552146c2a5dab7d057304525601e93866c9f,PodSandboxId:f90a8cd4afbad734b2ef60065a94d6e6bf304fbea0d22e9650c6c79bd4318e22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698710429673249372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 572544fb70b096f3120dd642
211b1701,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d5d2ac870654e412cd4d5e35ae60713dcdd82c7029f6e23c6f38de39aa2286,PodSandboxId:c8fba2f1bcc53630ba5a90d8a8a03e8d9db0fd9c9306697600372b7d966ed8e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698710425866989883,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4gxmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e217fd5-df8f-442d-a8e2-f60321b379b3,},Annotations:map[string]s
tring{io.kubernetes.container.hash: af0758db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c51cabffea83ffd4468990c967e6b1ed49c333afe9e7cd0db27b3d46844d182b,PodSandboxId:4bd39095007b5748b31bf594cd63d5a9d85598bad72b9241803a0f3704b18a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1698710413764295462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zrwts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfc7640-e0b2-4e6e-bee4-6d3503590092,},Annotations:map[string]string{io.kubernetes.container
.hash: d9db6e68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cc1b4a1a945521db83518a5a4893c4c70a332ff1bc39d52a2e7314ab907008,PodSandboxId:0b41c31d9555244d6144274977d6a720acb7d26345df3cd817bd83671f1405e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1698710413461581444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a72a0e28a26794ed92dee98e38f5f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ddf6180,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dbed496ab468bb1ed4837e57561cbb947d82ec054f03630c6009c68f77eadca,PodSandboxId:cd00f79645242e8c20fc7e7588f812579d106a3c6284869b36b40af75a6798f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698710409804532798,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-511532,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: c1b50f99d94ff0f18dea14fb1f15af59,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:269d4de2074d82da8fa20b06caabc445adb0e2c0e7dfd583a9d6d6bf5b7d02b5,PodSandboxId:200bbf319b41297226b2ab030f43b2d073df0837f59e83bba2434363da015292,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698710409311306105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93a27823f0dd03ad34311bb2da17
cb0,},Annotations:map[string]string{io.kubernetes.container.hash: cdc14025,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f16d3175523917e560a3be043acbc0928640caa3114c523e67e5fc9144d699,PodSandboxId:1c4f3a5357370fa18bb832cb7095a60ef3c310bde50ca1e44fd5466698121ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698710409204282434,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 572544fb70b096f3120dd642211b1701,},Annotations:map[string
]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31904c72f210d02df2382474f051850a9fa54ff58a1867f7dfe89e41a988955c,PodSandboxId:41348674a41b171962cd40f9d2b740a063e65fe4a6e9b5caab0d479cbc7dd678,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698710394003069536,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4gxmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e217fd5-df8f-442d-a8e2-f60321b379b3,},Annotations:map[string]string{io.kubernetes.container.hash: af0758db,io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca384a40b7cbfeaaf36df66cb9f73b8366504e495e4361545c82daf1e5dbf6ea,PodSandboxId:046d87fa3648114d5271154aa87120f33383dcec151de77c5e1ae4a756eb1e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698710393524989842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-blsnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c35ba2-5c5e-4908-8567-dff97d6abe21,},Annotations:map[string]string{io.kubernetes.container.hash: 811245ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=61710c71-533d-4a6d-b20f-61e9e6ce48a1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.821044606Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=136d521c-04b4-48b8-b48f-39389c0a0b1d name=/runtime.v1.RuntimeService/Version
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.821105212Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=136d521c-04b4-48b8-b48f-39389c0a0b1d name=/runtime.v1.RuntimeService/Version
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.822986639Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f13847da-8c6a-4e9e-9bd6-7a6d7aa43268 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.823327059Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698710460823313936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=f13847da-8c6a-4e9e-9bd6-7a6d7aa43268 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.824130665Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=69a60665-1c5a-4966-aa9f-20bcfeea1f59 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.824177615Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=69a60665-1c5a-4966-aa9f-20bcfeea1f59 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.824430433Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4fdf4edb1b322426ea741e8fa81f880a9aa26d23afb65149dc720d2a10f2e28e,PodSandboxId:4bd39095007b5748b31bf594cd63d5a9d85598bad72b9241803a0f3704b18a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698710439614295872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zrwts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfc7640-e0b2-4e6e-bee4-6d3503590092,},Annotations:map[string]string{io.kubernetes.container.hash: d9db6e68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c53374fd13a57d9260e1d58829e1b7a69848d385cc036b9ee3037fd77360772,PodSandboxId:1e55730dc1b07abd11c532d8a7871e3e45b062ea0e7382139226a40f814dc399,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698710433543986564,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-511532,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: c1b50f99d94ff0f18dea14fb1f15af59,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e1476f7c8ced05109e60047e6746fda560b19c281a1d0159091a36d64387d8,PodSandboxId:0b41c31d9555244d6144274977d6a720acb7d26345df3cd817bd83671f1405e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698710433510389068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a72a0e28a26794ed92dee98e38f
5f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ddf6180,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc40a0a2d0c66705c8e9475b6bbae234bccc8cc2f6b836f4f9b3e53c89fd0b7c,PodSandboxId:0f0a2dd4d90c6db9277fdf5e7cfbac484f3807cb2f5cea6f77b107979a237cee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698710429682048410,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93a27823f0dd03ad34311bb2da17cb0,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: cdc14025,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30c9e1047e07fd90e20fcd0d4c36552146c2a5dab7d057304525601e93866c9f,PodSandboxId:f90a8cd4afbad734b2ef60065a94d6e6bf304fbea0d22e9650c6c79bd4318e22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698710429673249372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 572544fb70b096f3120dd642
211b1701,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d5d2ac870654e412cd4d5e35ae60713dcdd82c7029f6e23c6f38de39aa2286,PodSandboxId:c8fba2f1bcc53630ba5a90d8a8a03e8d9db0fd9c9306697600372b7d966ed8e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698710425866989883,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4gxmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e217fd5-df8f-442d-a8e2-f60321b379b3,},Annotations:map[string]s
tring{io.kubernetes.container.hash: af0758db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c51cabffea83ffd4468990c967e6b1ed49c333afe9e7cd0db27b3d46844d182b,PodSandboxId:4bd39095007b5748b31bf594cd63d5a9d85598bad72b9241803a0f3704b18a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1698710413764295462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zrwts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfc7640-e0b2-4e6e-bee4-6d3503590092,},Annotations:map[string]string{io.kubernetes.container
.hash: d9db6e68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cc1b4a1a945521db83518a5a4893c4c70a332ff1bc39d52a2e7314ab907008,PodSandboxId:0b41c31d9555244d6144274977d6a720acb7d26345df3cd817bd83671f1405e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1698710413461581444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a72a0e28a26794ed92dee98e38f5f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ddf6180,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dbed496ab468bb1ed4837e57561cbb947d82ec054f03630c6009c68f77eadca,PodSandboxId:cd00f79645242e8c20fc7e7588f812579d106a3c6284869b36b40af75a6798f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698710409804532798,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-511532,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: c1b50f99d94ff0f18dea14fb1f15af59,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:269d4de2074d82da8fa20b06caabc445adb0e2c0e7dfd583a9d6d6bf5b7d02b5,PodSandboxId:200bbf319b41297226b2ab030f43b2d073df0837f59e83bba2434363da015292,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698710409311306105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93a27823f0dd03ad34311bb2da17
cb0,},Annotations:map[string]string{io.kubernetes.container.hash: cdc14025,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f16d3175523917e560a3be043acbc0928640caa3114c523e67e5fc9144d699,PodSandboxId:1c4f3a5357370fa18bb832cb7095a60ef3c310bde50ca1e44fd5466698121ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698710409204282434,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 572544fb70b096f3120dd642211b1701,},Annotations:map[string
]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31904c72f210d02df2382474f051850a9fa54ff58a1867f7dfe89e41a988955c,PodSandboxId:41348674a41b171962cd40f9d2b740a063e65fe4a6e9b5caab0d479cbc7dd678,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698710394003069536,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4gxmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e217fd5-df8f-442d-a8e2-f60321b379b3,},Annotations:map[string]string{io.kubernetes.container.hash: af0758db,io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca384a40b7cbfeaaf36df66cb9f73b8366504e495e4361545c82daf1e5dbf6ea,PodSandboxId:046d87fa3648114d5271154aa87120f33383dcec151de77c5e1ae4a756eb1e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698710393524989842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-blsnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c35ba2-5c5e-4908-8567-dff97d6abe21,},Annotations:map[string]string{io.kubernetes.container.hash: 811245ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=69a60665-1c5a-4966-aa9f-20bcfeea1f59 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.841864790Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=db32aec3-ca9d-4150-9c00-16ca53d9c047 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.842096729Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c8fba2f1bcc53630ba5a90d8a8a03e8d9db0fd9c9306697600372b7d966ed8e8,Metadata:&PodSandboxMetadata{Name:kube-proxy-4gxmp,Uid:8e217fd5-df8f-442d-a8e2-f60321b379b3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1698710425166555344,Labels:map[string]string{controller-revision-hash: dffc744c9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4gxmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e217fd5-df8f-442d-a8e2-f60321b379b3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-30T23:59:51.893087332Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4bd39095007b5748b31bf594cd63d5a9d85598bad72b9241803a0f3704b18a2e,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-zrwts,Uid:6dfc7640-e0b2-4e6e-bee4-6d3503590092,Na
mespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1698710411833888899,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-zrwts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfc7640-e0b2-4e6e-bee4-6d3503590092,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-30T23:59:52.020886403Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f90a8cd4afbad734b2ef60065a94d6e6bf304fbea0d22e9650c6c79bd4318e22,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-511532,Uid:572544fb70b096f3120dd642211b1701,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1698710411779095301,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 572544fb70b096f3120dd642211b1701,tier: control-plan
e,},Annotations:map[string]string{kubernetes.io/config.hash: 572544fb70b096f3120dd642211b1701,kubernetes.io/config.seen: 2023-10-30T23:59:40.109451286Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1e55730dc1b07abd11c532d8a7871e3e45b062ea0e7382139226a40f814dc399,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-511532,Uid:c1b50f99d94ff0f18dea14fb1f15af59,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1698710411773221776,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1b50f99d94ff0f18dea14fb1f15af59,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c1b50f99d94ff0f18dea14fb1f15af59,kubernetes.io/config.seen: 2023-10-30T23:59:40.109458694Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0f0a2dd4d90c6db9277fdf5e7cfbac484f3807cb2f5cea6f77b107979a237cee,Metadata:&PodSand
boxMetadata{Name:kube-apiserver-pause-511532,Uid:e93a27823f0dd03ad34311bb2da17cb0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1698710411719937616,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93a27823f0dd03ad34311bb2da17cb0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.111:8443,kubernetes.io/config.hash: e93a27823f0dd03ad34311bb2da17cb0,kubernetes.io/config.seen: 2023-10-30T23:59:40.109462598Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0b41c31d9555244d6144274977d6a720acb7d26345df3cd817bd83671f1405e1,Metadata:&PodSandboxMetadata{Name:etcd-pause-511532,Uid:2a72a0e28a26794ed92dee98e38f5f1b,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1698710411672338284,Labels:map[string]string{component: etcd,io.kubernetes.contain
er.name: POD,io.kubernetes.pod.name: etcd-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a72a0e28a26794ed92dee98e38f5f1b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.111:2379,kubernetes.io/config.hash: 2a72a0e28a26794ed92dee98e38f5f1b,kubernetes.io/config.seen: 2023-10-30T23:59:40.109460558Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:200bbf319b41297226b2ab030f43b2d073df0837f59e83bba2434363da015292,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-511532,Uid:e93a27823f0dd03ad34311bb2da17cb0,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1698710408048665068,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93a27823f0dd03ad34311bb2da17cb0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes
.io/kube-apiserver.advertise-address.endpoint: 192.168.61.111:8443,kubernetes.io/config.hash: e93a27823f0dd03ad34311bb2da17cb0,kubernetes.io/config.seen: 2023-10-30T23:59:40.109462598Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cd00f79645242e8c20fc7e7588f812579d106a3c6284869b36b40af75a6798f6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-511532,Uid:c1b50f99d94ff0f18dea14fb1f15af59,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1698710407986131244,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1b50f99d94ff0f18dea14fb1f15af59,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c1b50f99d94ff0f18dea14fb1f15af59,kubernetes.io/config.seen: 2023-10-30T23:59:40.109458694Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1c4f3a5357370fa18bb832cb7095a60ef3c310bde50ca1
e44fd5466698121ff4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-511532,Uid:572544fb70b096f3120dd642211b1701,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1698710407537508202,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 572544fb70b096f3120dd642211b1701,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 572544fb70b096f3120dd642211b1701,kubernetes.io/config.seen: 2023-10-30T23:59:40.109451286Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:41348674a41b171962cd40f9d2b740a063e65fe4a6e9b5caab0d479cbc7dd678,Metadata:&PodSandboxMetadata{Name:kube-proxy-4gxmp,Uid:8e217fd5-df8f-442d-a8e2-f60321b379b3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1698710392225149967,Labels:map[string]string{controller-revision-hash: dffc744c9,io.kubernetes.
container.name: POD,io.kubernetes.pod.name: kube-proxy-4gxmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e217fd5-df8f-442d-a8e2-f60321b379b3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-30T23:59:51.893087332Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:046d87fa3648114d5271154aa87120f33383dcec151de77c5e1ae4a756eb1e74,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-blsnn,Uid:f7c35ba2-5c5e-4908-8567-dff97d6abe21,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1698710392195420580,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-blsnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c35ba2-5c5e-4908-8567-dff97d6abe21,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-30T23:59:51.991575283Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}"
file="go-grpc-middleware/chain.go:25" id=db32aec3-ca9d-4150-9c00-16ca53d9c047 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.843302085Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6f1686dc-ecef-4e74-9311-1b83bb773575 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.843351300Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6f1686dc-ecef-4e74-9311-1b83bb773575 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:01:00 pause-511532 crio[2596]: time="2023-10-31 00:01:00.843597208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4fdf4edb1b322426ea741e8fa81f880a9aa26d23afb65149dc720d2a10f2e28e,PodSandboxId:4bd39095007b5748b31bf594cd63d5a9d85598bad72b9241803a0f3704b18a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698710439614295872,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zrwts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfc7640-e0b2-4e6e-bee4-6d3503590092,},Annotations:map[string]string{io.kubernetes.container.hash: d9db6e68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c53374fd13a57d9260e1d58829e1b7a69848d385cc036b9ee3037fd77360772,PodSandboxId:1e55730dc1b07abd11c532d8a7871e3e45b062ea0e7382139226a40f814dc399,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698710433543986564,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-511532,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: c1b50f99d94ff0f18dea14fb1f15af59,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e1476f7c8ced05109e60047e6746fda560b19c281a1d0159091a36d64387d8,PodSandboxId:0b41c31d9555244d6144274977d6a720acb7d26345df3cd817bd83671f1405e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698710433510389068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a72a0e28a26794ed92dee98e38f
5f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ddf6180,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc40a0a2d0c66705c8e9475b6bbae234bccc8cc2f6b836f4f9b3e53c89fd0b7c,PodSandboxId:0f0a2dd4d90c6db9277fdf5e7cfbac484f3807cb2f5cea6f77b107979a237cee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698710429682048410,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93a27823f0dd03ad34311bb2da17cb0,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: cdc14025,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30c9e1047e07fd90e20fcd0d4c36552146c2a5dab7d057304525601e93866c9f,PodSandboxId:f90a8cd4afbad734b2ef60065a94d6e6bf304fbea0d22e9650c6c79bd4318e22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698710429673249372,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 572544fb70b096f3120dd642
211b1701,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d5d2ac870654e412cd4d5e35ae60713dcdd82c7029f6e23c6f38de39aa2286,PodSandboxId:c8fba2f1bcc53630ba5a90d8a8a03e8d9db0fd9c9306697600372b7d966ed8e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698710425866989883,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4gxmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e217fd5-df8f-442d-a8e2-f60321b379b3,},Annotations:map[string]s
tring{io.kubernetes.container.hash: af0758db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c51cabffea83ffd4468990c967e6b1ed49c333afe9e7cd0db27b3d46844d182b,PodSandboxId:4bd39095007b5748b31bf594cd63d5a9d85598bad72b9241803a0f3704b18a2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1698710413764295462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zrwts,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfc7640-e0b2-4e6e-bee4-6d3503590092,},Annotations:map[string]string{io.kubernetes.container
.hash: d9db6e68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cc1b4a1a945521db83518a5a4893c4c70a332ff1bc39d52a2e7314ab907008,PodSandboxId:0b41c31d9555244d6144274977d6a720acb7d26345df3cd817bd83671f1405e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1698710413461581444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a72a0e28a26794ed92dee98e38f5f1b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ddf6180,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dbed496ab468bb1ed4837e57561cbb947d82ec054f03630c6009c68f77eadca,PodSandboxId:cd00f79645242e8c20fc7e7588f812579d106a3c6284869b36b40af75a6798f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698710409804532798,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-511532,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: c1b50f99d94ff0f18dea14fb1f15af59,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:269d4de2074d82da8fa20b06caabc445adb0e2c0e7dfd583a9d6d6bf5b7d02b5,PodSandboxId:200bbf319b41297226b2ab030f43b2d073df0837f59e83bba2434363da015292,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698710409311306105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e93a27823f0dd03ad34311bb2da17
cb0,},Annotations:map[string]string{io.kubernetes.container.hash: cdc14025,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f16d3175523917e560a3be043acbc0928640caa3114c523e67e5fc9144d699,PodSandboxId:1c4f3a5357370fa18bb832cb7095a60ef3c310bde50ca1e44fd5466698121ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698710409204282434,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-511532,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 572544fb70b096f3120dd642211b1701,},Annotations:map[string
]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31904c72f210d02df2382474f051850a9fa54ff58a1867f7dfe89e41a988955c,PodSandboxId:41348674a41b171962cd40f9d2b740a063e65fe4a6e9b5caab0d479cbc7dd678,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698710394003069536,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4gxmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e217fd5-df8f-442d-a8e2-f60321b379b3,},Annotations:map[string]string{io.kubernetes.container.hash: af0758db,io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca384a40b7cbfeaaf36df66cb9f73b8366504e495e4361545c82daf1e5dbf6ea,PodSandboxId:046d87fa3648114d5271154aa87120f33383dcec151de77c5e1ae4a756eb1e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698710393524989842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-blsnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c35ba2-5c5e-4908-8567-dff97d6abe21,},Annotations:map[string]string{io.kubernetes.container.hash: 811245ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6f1686dc-ecef-4e74-9311-1b83bb773575 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4fdf4edb1b322       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   21 seconds ago       Running             coredns                   2                   4bd39095007b5       coredns-5dd5756b68-zrwts
	6c53374fd13a5       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   27 seconds ago       Running             kube-scheduler            2                   1e55730dc1b07       kube-scheduler-pause-511532
	96e1476f7c8ce       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   27 seconds ago       Running             etcd                      2                   0b41c31d95552       etcd-pause-511532
	bc40a0a2d0c66       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   31 seconds ago       Running             kube-apiserver            2                   0f0a2dd4d90c6       kube-apiserver-pause-511532
	30c9e1047e07f       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   31 seconds ago       Running             kube-controller-manager   2                   f90a8cd4afbad       kube-controller-manager-pause-511532
	f8d5d2ac87065       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   35 seconds ago       Running             kube-proxy                1                   c8fba2f1bcc53       kube-proxy-4gxmp
	c51cabffea83f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   47 seconds ago       Exited              coredns                   1                   4bd39095007b5       coredns-5dd5756b68-zrwts
	96cc1b4a1a945       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   47 seconds ago       Exited              etcd                      1                   0b41c31d95552       etcd-pause-511532
	1dbed496ab468       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   51 seconds ago       Exited              kube-scheduler            1                   cd00f79645242       kube-scheduler-pause-511532
	269d4de2074d8       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   51 seconds ago       Exited              kube-apiserver            1                   200bbf319b412       kube-apiserver-pause-511532
	c1f16d3175523       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   51 seconds ago       Exited              kube-controller-manager   1                   1c4f3a5357370       kube-controller-manager-pause-511532
	31904c72f210d       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   About a minute ago   Exited              kube-proxy                0                   41348674a41b1       kube-proxy-4gxmp
	ca384a40b7cbf       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   About a minute ago   Exited              coredns                   0                   046d87fa36481       coredns-5dd5756b68-blsnn
	
	* 
	* ==> coredns [4fdf4edb1b322426ea741e8fa81f880a9aa26d23afb65149dc720d2a10f2e28e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35470 - 40532 "HINFO IN 4732493365500966030.5342294527962623447. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00891906s
	
	* 
	* ==> coredns [c51cabffea83ffd4468990c967e6b1ed49c333afe9e7cd0db27b3d46844d182b] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34364 - 56213 "HINFO IN 5845250037702485935.8808534765531650410. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012978982s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [ca384a40b7cbfeaaf36df66cb9f73b8366504e495e4361545c82daf1e5dbf6ea] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-511532
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-511532
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4
	                    minikube.k8s.io/name=pause-511532
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_30T23_59_40_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Oct 2023 23:59:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-511532
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 Oct 2023 00:00:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 00:00:39 +0000   Mon, 30 Oct 2023 23:59:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 00:00:39 +0000   Mon, 30 Oct 2023 23:59:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 00:00:39 +0000   Mon, 30 Oct 2023 23:59:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 00:00:39 +0000   Mon, 30 Oct 2023 23:59:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.111
	  Hostname:    pause-511532
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ea31e160b7c4d8c8de508a12fface0c
	  System UUID:                3ea31e16-0b7c-4d8c-8de5-08a12fface0c
	  Boot ID:                    3a72bca7-182f-4b62-b705-ba4acf68d404
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-zrwts                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     70s
	  kube-system                 etcd-pause-511532                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         81s
	  kube-system                 kube-apiserver-pause-511532             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-pause-511532    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-4gxmp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-scheduler-pause-511532             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 67s                kube-proxy       
	  Normal  Starting                 22s                kube-proxy       
	  Normal  Starting                 92s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  92s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  91s (x8 over 92s)  kubelet          Node pause-511532 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s (x8 over 92s)  kubelet          Node pause-511532 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s (x7 over 92s)  kubelet          Node pause-511532 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     81s                kubelet          Node pause-511532 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  81s                kubelet          Node pause-511532 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s                kubelet          Node pause-511532 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                81s                kubelet          Node pause-511532 status is now: NodeReady
	  Normal  Starting                 81s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           71s                node-controller  Node pause-511532 event: Registered Node pause-511532 in Controller
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28s (x8 over 29s)  kubelet          Node pause-511532 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x8 over 29s)  kubelet          Node pause-511532 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x7 over 29s)  kubelet          Node pause-511532 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10s                node-controller  Node pause-511532 event: Registered Node pause-511532 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067399] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.635892] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Oct30 23:59] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.173503] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.228875] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.742354] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.130505] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.201490] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.167083] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.278895] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[ +10.722209] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[ +10.405774] systemd-fstab-generator[1267]: Ignoring "noauto" for root device
	[Oct31 00:00] kauditd_printk_skb: 24 callbacks suppressed
	[  +1.207129] systemd-fstab-generator[2351]: Ignoring "noauto" for root device
	[  +0.254962] systemd-fstab-generator[2362]: Ignoring "noauto" for root device
	[  +0.290110] systemd-fstab-generator[2388]: Ignoring "noauto" for root device
	[  +0.297301] systemd-fstab-generator[2445]: Ignoring "noauto" for root device
	[  +0.471559] systemd-fstab-generator[2489]: Ignoring "noauto" for root device
	[ +21.951069] systemd-fstab-generator[3360]: Ignoring "noauto" for root device
	[  +7.495702] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.673178] hrtimer: interrupt took 2972624 ns
	
	* 
	* ==> etcd [96cc1b4a1a945521db83518a5a4893c4c70a332ff1bc39d52a2e7314ab907008] <==
	* {"level":"info","ts":"2023-10-31T00:00:14.648645Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.111:2380"}
	{"level":"info","ts":"2023-10-31T00:00:15.974133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-31T00:00:15.974357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-31T00:00:15.974418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 received MsgPreVoteResp from a698de7cf8a0ada7 at term 2"}
	{"level":"info","ts":"2023-10-31T00:00:15.974464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 became candidate at term 3"}
	{"level":"info","ts":"2023-10-31T00:00:15.974488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 received MsgVoteResp from a698de7cf8a0ada7 at term 3"}
	{"level":"info","ts":"2023-10-31T00:00:15.974515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 became leader at term 3"}
	{"level":"info","ts":"2023-10-31T00:00:15.974541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a698de7cf8a0ada7 elected leader a698de7cf8a0ada7 at term 3"}
	{"level":"info","ts":"2023-10-31T00:00:15.982815Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T00:00:15.983903Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a698de7cf8a0ada7","local-member-attributes":"{Name:pause-511532 ClientURLs:[https://192.168.61.111:2379]}","request-path":"/0/members/a698de7cf8a0ada7/attributes","cluster-id":"e340cdbee7b26912","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-31T00:00:15.98431Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T00:00:15.984651Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.111:2379"}
	{"level":"info","ts":"2023-10-31T00:00:15.985675Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-31T00:00:15.985732Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-31T00:00:15.985986Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-31T00:00:29.028541Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-31T00:00:29.028627Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-511532","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.111:2380"],"advertise-client-urls":["https://192.168.61.111:2379"]}
	{"level":"warn","ts":"2023-10-31T00:00:29.028846Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-31T00:00:29.029186Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-31T00:00:29.030973Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.111:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-31T00:00:29.031022Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.111:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-31T00:00:29.031309Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"a698de7cf8a0ada7","current-leader-member-id":"a698de7cf8a0ada7"}
	{"level":"info","ts":"2023-10-31T00:00:29.036255Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.61.111:2380"}
	{"level":"info","ts":"2023-10-31T00:00:29.036495Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.61.111:2380"}
	{"level":"info","ts":"2023-10-31T00:00:29.036567Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-511532","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.111:2380"],"advertise-client-urls":["https://192.168.61.111:2379"]}
	
	* 
	* ==> etcd [96e1476f7c8ced05109e60047e6746fda560b19c281a1d0159091a36d64387d8] <==
	* {"level":"info","ts":"2023-10-31T00:00:35.380386Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-31T00:00:35.380397Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-31T00:00:35.380689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 switched to configuration voters=(12004589435084647847)"}
	{"level":"info","ts":"2023-10-31T00:00:35.38089Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e340cdbee7b26912","local-member-id":"a698de7cf8a0ada7","added-peer-id":"a698de7cf8a0ada7","added-peer-peer-urls":["https://192.168.61.111:2380"]}
	{"level":"info","ts":"2023-10-31T00:00:35.381055Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e340cdbee7b26912","local-member-id":"a698de7cf8a0ada7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T00:00:35.381125Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T00:00:35.384454Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-31T00:00:35.385015Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"a698de7cf8a0ada7","initial-advertise-peer-urls":["https://192.168.61.111:2380"],"listen-peer-urls":["https://192.168.61.111:2380"],"advertise-client-urls":["https://192.168.61.111:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.111:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-31T00:00:35.385087Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-31T00:00:35.385404Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.111:2380"}
	{"level":"info","ts":"2023-10-31T00:00:35.385464Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.111:2380"}
	{"level":"info","ts":"2023-10-31T00:00:36.749092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-10-31T00:00:36.749448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-10-31T00:00:36.749564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 received MsgPreVoteResp from a698de7cf8a0ada7 at term 3"}
	{"level":"info","ts":"2023-10-31T00:00:36.749658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 became candidate at term 4"}
	{"level":"info","ts":"2023-10-31T00:00:36.74971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 received MsgVoteResp from a698de7cf8a0ada7 at term 4"}
	{"level":"info","ts":"2023-10-31T00:00:36.749852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a698de7cf8a0ada7 became leader at term 4"}
	{"level":"info","ts":"2023-10-31T00:00:36.749925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a698de7cf8a0ada7 elected leader a698de7cf8a0ada7 at term 4"}
	{"level":"info","ts":"2023-10-31T00:00:36.757347Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T00:00:36.757357Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a698de7cf8a0ada7","local-member-attributes":"{Name:pause-511532 ClientURLs:[https://192.168.61.111:2379]}","request-path":"/0/members/a698de7cf8a0ada7/attributes","cluster-id":"e340cdbee7b26912","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-31T00:00:36.757718Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T00:00:36.75897Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.111:2379"}
	{"level":"info","ts":"2023-10-31T00:00:36.759415Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-31T00:00:36.759661Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-31T00:00:36.759712Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:01:01 up 2 min,  0 users,  load average: 1.47, 0.61, 0.22
	Linux pause-511532 5.10.57 #1 SMP Mon Oct 30 21:42:24 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [269d4de2074d82da8fa20b06caabc445adb0e2c0e7dfd583a9d6d6bf5b7d02b5] <==
	* 
	* 
	* ==> kube-apiserver [bc40a0a2d0c66705c8e9475b6bbae234bccc8cc2f6b836f4f9b3e53c89fd0b7c] <==
	* I1031 00:00:38.723606       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1031 00:00:38.723722       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1031 00:00:38.723743       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1031 00:00:38.924208       1 shared_informer.go:318] Caches are synced for configmaps
	I1031 00:00:38.924346       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1031 00:00:38.926184       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1031 00:00:38.938197       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1031 00:00:38.943053       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1031 00:00:38.953991       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1031 00:00:38.954529       1 aggregator.go:166] initial CRD sync complete...
	I1031 00:00:38.954619       1 autoregister_controller.go:141] Starting autoregister controller
	I1031 00:00:38.954669       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1031 00:00:38.954712       1 cache.go:39] Caches are synced for autoregister controller
	I1031 00:00:38.962943       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1031 00:00:38.963000       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1031 00:00:38.979019       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E1031 00:00:39.019577       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1031 00:00:39.665231       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1031 00:00:40.491714       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1031 00:00:40.511876       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1031 00:00:40.581647       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1031 00:00:40.662048       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1031 00:00:40.676026       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1031 00:00:51.664426       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1031 00:00:51.714657       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [30c9e1047e07fd90e20fcd0d4c36552146c2a5dab7d057304525601e93866c9f] <==
	* I1031 00:00:51.364929       1 event.go:307] "Event occurred" object="pause-511532" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-511532 event: Registered Node pause-511532 in Controller"
	I1031 00:00:51.388006       1 shared_informer.go:318] Caches are synced for HPA
	I1031 00:00:51.393542       1 shared_informer.go:318] Caches are synced for GC
	I1031 00:00:51.393588       1 shared_informer.go:318] Caches are synced for PVC protection
	I1031 00:00:51.394869       1 shared_informer.go:318] Caches are synced for deployment
	I1031 00:00:51.402812       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1031 00:00:51.408428       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1031 00:00:51.410100       1 shared_informer.go:318] Caches are synced for stateful set
	I1031 00:00:51.413939       1 shared_informer.go:318] Caches are synced for ephemeral
	I1031 00:00:51.418124       1 shared_informer.go:318] Caches are synced for attach detach
	I1031 00:00:51.420010       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1031 00:00:51.426834       1 shared_informer.go:318] Caches are synced for endpoint
	I1031 00:00:51.431997       1 shared_informer.go:318] Caches are synced for resource quota
	I1031 00:00:51.432060       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1031 00:00:51.437013       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1031 00:00:51.438414       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1031 00:00:51.439497       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1031 00:00:51.440725       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1031 00:00:51.446478       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1031 00:00:51.516542       1 shared_informer.go:318] Caches are synced for resource quota
	I1031 00:00:51.626046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="217.484204ms"
	I1031 00:00:51.626648       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="285.457µs"
	I1031 00:00:51.851402       1 shared_informer.go:318] Caches are synced for garbage collector
	I1031 00:00:51.910445       1 shared_informer.go:318] Caches are synced for garbage collector
	I1031 00:00:51.910547       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [c1f16d3175523917e560a3be043acbc0928640caa3114c523e67e5fc9144d699] <==
	* 
	* 
	* ==> kube-proxy [31904c72f210d02df2382474f051850a9fa54ff58a1867f7dfe89e41a988955c] <==
	* I1030 23:59:54.255850       1 server_others.go:69] "Using iptables proxy"
	I1030 23:59:54.273475       1 node.go:141] Successfully retrieved node IP: 192.168.61.111
	I1030 23:59:54.349114       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1030 23:59:54.349254       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1030 23:59:54.371328       1 server_others.go:152] "Using iptables Proxier"
	I1030 23:59:54.372314       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1030 23:59:54.376847       1 server.go:846] "Version info" version="v1.28.3"
	I1030 23:59:54.376982       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1030 23:59:54.381348       1 config.go:188] "Starting service config controller"
	I1030 23:59:54.382251       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1030 23:59:54.382390       1 config.go:315] "Starting node config controller"
	I1030 23:59:54.382547       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1030 23:59:54.384447       1 config.go:97] "Starting endpoint slice config controller"
	I1030 23:59:54.401257       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1030 23:59:54.483252       1 shared_informer.go:318] Caches are synced for node config
	I1030 23:59:54.483330       1 shared_informer.go:318] Caches are synced for service config
	I1030 23:59:54.501718       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [f8d5d2ac870654e412cd4d5e35ae60713dcdd82c7029f6e23c6f38de39aa2286] <==
	* I1031 00:00:26.029290       1 server_others.go:69] "Using iptables proxy"
	E1031 00:00:26.032074       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-511532": dial tcp 192.168.61.111:8443: connect: connection refused
	E1031 00:00:27.192310       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-511532": dial tcp 192.168.61.111:8443: connect: connection refused
	E1031 00:00:29.295171       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-511532": dial tcp 192.168.61.111:8443: connect: connection refused
	I1031 00:00:38.985403       1 node.go:141] Successfully retrieved node IP: 192.168.61.111
	I1031 00:00:39.111935       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1031 00:00:39.111999       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1031 00:00:39.126019       1 server_others.go:152] "Using iptables Proxier"
	I1031 00:00:39.126141       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1031 00:00:39.126867       1 server.go:846] "Version info" version="v1.28.3"
	I1031 00:00:39.126927       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 00:00:39.130668       1 config.go:315] "Starting node config controller"
	I1031 00:00:39.130723       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1031 00:00:39.131005       1 config.go:188] "Starting service config controller"
	I1031 00:00:39.131072       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1031 00:00:39.131167       1 config.go:97] "Starting endpoint slice config controller"
	I1031 00:00:39.131230       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1031 00:00:39.231538       1 shared_informer.go:318] Caches are synced for node config
	I1031 00:00:39.231637       1 shared_informer.go:318] Caches are synced for service config
	I1031 00:00:39.231524       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [1dbed496ab468bb1ed4837e57561cbb947d82ec054f03630c6009c68f77eadca] <==
	* 
	* 
	* ==> kube-scheduler [6c53374fd13a57d9260e1d58829e1b7a69848d385cc036b9ee3037fd77360772] <==
	* I1031 00:00:35.911526       1 serving.go:348] Generated self-signed cert in-memory
	I1031 00:00:39.049835       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1031 00:00:39.049975       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 00:00:39.093667       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1031 00:00:39.094104       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1031 00:00:39.094174       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1031 00:00:39.094211       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1031 00:00:39.106690       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1031 00:00:39.109959       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1031 00:00:39.107092       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1031 00:00:39.110327       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1031 00:00:39.195272       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1031 00:00:39.210258       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1031 00:00:39.210883       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-30 23:59:02 UTC, ends at Tue 2023-10-31 00:01:02 UTC. --
	Oct 31 00:00:33 pause-511532 kubelet[3366]: E1031 00:00:33.412070    3366 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-511532&limit=500&resourceVersion=0": dial tcp 192.168.61.111:8443: connect: connection refused
	Oct 31 00:00:33 pause-511532 kubelet[3366]: I1031 00:00:33.456349    3366 scope.go:117] "RemoveContainer" containerID="96cc1b4a1a945521db83518a5a4893c4c70a332ff1bc39d52a2e7314ab907008"
	Oct 31 00:00:33 pause-511532 kubelet[3366]: I1031 00:00:33.458872    3366 scope.go:117] "RemoveContainer" containerID="269d4de2074d82da8fa20b06caabc445adb0e2c0e7dfd583a9d6d6bf5b7d02b5"
	Oct 31 00:00:33 pause-511532 kubelet[3366]: I1031 00:00:33.460419    3366 scope.go:117] "RemoveContainer" containerID="c1f16d3175523917e560a3be043acbc0928640caa3114c523e67e5fc9144d699"
	Oct 31 00:00:33 pause-511532 kubelet[3366]: I1031 00:00:33.463186    3366 scope.go:117] "RemoveContainer" containerID="1dbed496ab468bb1ed4837e57561cbb947d82ec054f03630c6009c68f77eadca"
	Oct 31 00:00:33 pause-511532 kubelet[3366]: W1031 00:00:33.575580    3366 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.111:8443: connect: connection refused
	Oct 31 00:00:33 pause-511532 kubelet[3366]: E1031 00:00:33.575690    3366 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.111:8443: connect: connection refused
	Oct 31 00:00:33 pause-511532 kubelet[3366]: E1031 00:00:33.722087    3366 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-511532?timeout=10s\": dial tcp 192.168.61.111:8443: connect: connection refused" interval="1.6s"
	Oct 31 00:00:33 pause-511532 kubelet[3366]: I1031 00:00:33.825032    3366 kubelet_node_status.go:70] "Attempting to register node" node="pause-511532"
	Oct 31 00:00:33 pause-511532 kubelet[3366]: E1031 00:00:33.825902    3366 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.111:8443: connect: connection refused" node="pause-511532"
	Oct 31 00:00:35 pause-511532 kubelet[3366]: I1031 00:00:35.428596    3366 kubelet_node_status.go:70] "Attempting to register node" node="pause-511532"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.010683    3366 kubelet_node_status.go:108] "Node was previously registered" node="pause-511532"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.010874    3366 kubelet_node_status.go:73] "Successfully registered node" node="pause-511532"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.013068    3366 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.014463    3366 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.274115    3366 apiserver.go:52] "Watching apiserver"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.278572    3366 topology_manager.go:215] "Topology Admit Handler" podUID="f7c35ba2-5c5e-4908-8567-dff97d6abe21" podNamespace="kube-system" podName="coredns-5dd5756b68-blsnn"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.278900    3366 topology_manager.go:215] "Topology Admit Handler" podUID="6dfc7640-e0b2-4e6e-bee4-6d3503590092" podNamespace="kube-system" podName="coredns-5dd5756b68-zrwts"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.279044    3366 topology_manager.go:215] "Topology Admit Handler" podUID="8e217fd5-df8f-442d-a8e2-f60321b379b3" podNamespace="kube-system" podName="kube-proxy-4gxmp"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.305692    3366 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.389537    3366 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e217fd5-df8f-442d-a8e2-f60321b379b3-xtables-lock\") pod \"kube-proxy-4gxmp\" (UID: \"8e217fd5-df8f-442d-a8e2-f60321b379b3\") " pod="kube-system/kube-proxy-4gxmp"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.389625    3366 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e217fd5-df8f-442d-a8e2-f60321b379b3-lib-modules\") pod \"kube-proxy-4gxmp\" (UID: \"8e217fd5-df8f-442d-a8e2-f60321b379b3\") " pod="kube-system/kube-proxy-4gxmp"
	Oct 31 00:00:39 pause-511532 kubelet[3366]: I1031 00:00:39.584094    3366 scope.go:117] "RemoveContainer" containerID="c51cabffea83ffd4468990c967e6b1ed49c333afe9e7cd0db27b3d46844d182b"
	Oct 31 00:00:40 pause-511532 kubelet[3366]: I1031 00:00:40.450989    3366 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f7c35ba2-5c5e-4908-8567-dff97d6abe21" path="/var/lib/kubelet/pods/f7c35ba2-5c5e-4908-8567-dff97d6abe21/volumes"
	Oct 31 00:00:48 pause-511532 kubelet[3366]: I1031 00:00:48.102133    3366 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-511532 -n pause-511532
helpers_test.go:261: (dbg) Run:  kubectl --context pause-511532 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (68.50s)
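The kube-proxy log above shows the component repeatedly getting "connection refused" on 192.168.61.111:8443 until the restarted apiserver came back at 00:00:38. The sketch below is an illustration only, not minikube or kube-proxy code; waitForTCP and the 2s/90s timings are assumptions made for the example. It shows the generic poll-until-reachable pattern that this kind of restart wait boils down to:

	// Illustration only (not minikube or kube-proxy code): poll a TCP endpoint
	// such as the apiserver at 192.168.61.111:8443 until it accepts connections
	// or a deadline passes, mirroring the "connection refused ... Successfully
	// retrieved node IP" sequence in the kube-proxy log above. waitForTCP and
	// the 2s/90s timings are assumptions made for the example.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func waitForTCP(addr string, interval, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		for {
			conn, err := net.DialTimeout("tcp", addr, interval)
			if err == nil {
				conn.Close()
				return nil // endpoint is reachable again
			}
			if time.Now().After(stop) {
				return fmt.Errorf("gave up waiting for %s: %w", addr, err)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		if err := waitForTCP("192.168.61.111:8443", 2*time.Second, 90*time.Second); err != nil {
			fmt.Println(err)
		}
	}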

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (140.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-225140 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-225140 --alsologtostderr -v=3: exit status 82 (2m1.706512269s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-225140"  ...
	* Stopping node "old-k8s-version-225140"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1031 00:04:43.270297  246832 out.go:296] Setting OutFile to fd 1 ...
	I1031 00:04:43.270423  246832 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:04:43.270430  246832 out.go:309] Setting ErrFile to fd 2...
	I1031 00:04:43.270439  246832 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:04:43.270654  246832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1031 00:04:43.270883  246832 out.go:303] Setting JSON to false
	I1031 00:04:43.270964  246832 mustload.go:65] Loading cluster: old-k8s-version-225140
	I1031 00:04:43.271333  246832 config.go:182] Loaded profile config "old-k8s-version-225140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1031 00:04:43.271411  246832 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/config.json ...
	I1031 00:04:43.271568  246832 mustload.go:65] Loading cluster: old-k8s-version-225140
	I1031 00:04:43.271667  246832 config.go:182] Loaded profile config "old-k8s-version-225140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1031 00:04:43.271700  246832 stop.go:39] StopHost: old-k8s-version-225140
	I1031 00:04:43.272092  246832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:04:43.272144  246832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:04:43.287114  246832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44551
	I1031 00:04:43.287629  246832 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:04:43.288305  246832 main.go:141] libmachine: Using API Version  1
	I1031 00:04:43.288326  246832 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:04:43.288798  246832 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:04:43.291678  246832 out.go:177] * Stopping node "old-k8s-version-225140"  ...
	I1031 00:04:43.293117  246832 main.go:141] libmachine: Stopping "old-k8s-version-225140"...
	I1031 00:04:43.293144  246832 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:04:43.294898  246832 main.go:141] libmachine: (old-k8s-version-225140) Calling .Stop
	I1031 00:04:43.298823  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 0/60
	I1031 00:04:44.301131  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 1/60
	I1031 00:04:45.303507  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 2/60
	I1031 00:04:46.305232  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 3/60
	I1031 00:04:47.306857  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 4/60
	I1031 00:04:48.309031  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 5/60
	I1031 00:04:49.310379  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 6/60
	I1031 00:04:50.312002  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 7/60
	I1031 00:04:51.313526  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 8/60
	I1031 00:04:52.315703  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 9/60
	I1031 00:04:53.317475  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 10/60
	I1031 00:04:54.319663  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 11/60
	I1031 00:04:55.321286  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 12/60
	I1031 00:04:56.323445  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 13/60
	I1031 00:04:57.325976  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 14/60
	I1031 00:04:58.327464  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 15/60
	I1031 00:04:59.329371  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 16/60
	I1031 00:05:00.331586  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 17/60
	I1031 00:05:01.333065  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 18/60
	I1031 00:05:02.334757  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 19/60
	I1031 00:05:03.336885  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 20/60
	I1031 00:05:04.338513  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 21/60
	I1031 00:05:05.340076  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 22/60
	I1031 00:05:06.341489  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 23/60
	I1031 00:05:07.343732  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 24/60
	I1031 00:05:08.345865  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 25/60
	I1031 00:05:09.347327  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 26/60
	I1031 00:05:10.349272  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 27/60
	I1031 00:05:11.351633  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 28/60
	I1031 00:05:12.353852  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 29/60
	I1031 00:05:13.356036  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 30/60
	I1031 00:05:14.357650  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 31/60
	I1031 00:05:15.359435  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 32/60
	I1031 00:05:16.360611  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 33/60
	I1031 00:05:17.361865  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 34/60
	I1031 00:05:18.364036  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 35/60
	I1031 00:05:19.365385  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 36/60
	I1031 00:05:20.367568  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 37/60
	I1031 00:05:21.368656  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 38/60
	I1031 00:05:22.370110  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 39/60
	I1031 00:05:23.372361  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 40/60
	I1031 00:05:24.373744  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 41/60
	I1031 00:05:25.375148  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 42/60
	I1031 00:05:26.376489  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 43/60
	I1031 00:05:27.377829  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 44/60
	I1031 00:05:28.379486  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 45/60
	I1031 00:05:29.380874  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 46/60
	I1031 00:05:30.382276  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 47/60
	I1031 00:05:31.383718  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 48/60
	I1031 00:05:32.385070  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 49/60
	I1031 00:05:33.387469  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 50/60
	I1031 00:05:34.388986  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 51/60
	I1031 00:05:35.390249  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 52/60
	I1031 00:05:36.392556  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 53/60
	I1031 00:05:37.393909  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 54/60
	I1031 00:05:38.395690  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 55/60
	I1031 00:05:39.397917  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 56/60
	I1031 00:05:40.399206  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 57/60
	I1031 00:05:41.400676  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 58/60
	I1031 00:05:42.401910  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 59/60
	I1031 00:05:43.403243  246832 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1031 00:05:43.403304  246832 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1031 00:05:43.403330  246832 retry.go:31] will retry after 1.356587657s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1031 00:05:44.760818  246832 stop.go:39] StopHost: old-k8s-version-225140
	I1031 00:05:44.761266  246832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:05:44.761338  246832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:05:44.775888  246832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38555
	I1031 00:05:44.776431  246832 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:05:44.776976  246832 main.go:141] libmachine: Using API Version  1
	I1031 00:05:44.776997  246832 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:05:44.777344  246832 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:05:44.779619  246832 out.go:177] * Stopping node "old-k8s-version-225140"  ...
	I1031 00:05:44.781078  246832 main.go:141] libmachine: Stopping "old-k8s-version-225140"...
	I1031 00:05:44.781096  246832 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:05:44.782778  246832 main.go:141] libmachine: (old-k8s-version-225140) Calling .Stop
	I1031 00:05:44.786397  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 0/60
	I1031 00:05:45.787849  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 1/60
	I1031 00:05:46.789361  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 2/60
	I1031 00:05:47.790663  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 3/60
	I1031 00:05:48.792152  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 4/60
	I1031 00:05:49.794100  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 5/60
	I1031 00:05:50.795794  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 6/60
	I1031 00:05:51.797140  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 7/60
	I1031 00:05:52.798746  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 8/60
	I1031 00:05:53.800172  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 9/60
	I1031 00:05:54.802147  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 10/60
	I1031 00:05:55.804362  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 11/60
	I1031 00:05:56.805962  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 12/60
	I1031 00:05:57.807840  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 13/60
	I1031 00:05:58.809173  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 14/60
	I1031 00:05:59.810954  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 15/60
	I1031 00:06:00.812522  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 16/60
	I1031 00:06:01.814683  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 17/60
	I1031 00:06:02.816015  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 18/60
	I1031 00:06:03.817657  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 19/60
	I1031 00:06:04.819403  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 20/60
	I1031 00:06:05.821289  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 21/60
	I1031 00:06:06.823490  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 22/60
	I1031 00:06:07.825009  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 23/60
	I1031 00:06:08.826408  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 24/60
	I1031 00:06:09.828162  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 25/60
	I1031 00:06:10.829633  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 26/60
	I1031 00:06:11.830997  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 27/60
	I1031 00:06:12.832274  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 28/60
	I1031 00:06:13.833638  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 29/60
	I1031 00:06:14.835659  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 30/60
	I1031 00:06:15.837016  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 31/60
	I1031 00:06:16.838385  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 32/60
	I1031 00:06:17.839635  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 33/60
	I1031 00:06:18.840952  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 34/60
	I1031 00:06:19.842716  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 35/60
	I1031 00:06:20.844788  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 36/60
	I1031 00:06:21.845962  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 37/60
	I1031 00:06:22.847236  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 38/60
	I1031 00:06:23.848580  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 39/60
	I1031 00:06:24.850139  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 40/60
	I1031 00:06:25.851542  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 41/60
	I1031 00:06:26.852865  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 42/60
	I1031 00:06:27.854088  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 43/60
	I1031 00:06:28.855548  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 44/60
	I1031 00:06:29.857805  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 45/60
	I1031 00:06:30.858818  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 46/60
	I1031 00:06:31.860044  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 47/60
	I1031 00:06:32.861102  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 48/60
	I1031 00:06:33.863243  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 49/60
	I1031 00:06:34.864850  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 50/60
	I1031 00:06:35.866260  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 51/60
	I1031 00:06:36.867527  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 52/60
	I1031 00:06:37.868922  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 53/60
	I1031 00:06:38.869987  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 54/60
	I1031 00:06:39.871376  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 55/60
	I1031 00:06:40.872549  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 56/60
	I1031 00:06:41.874030  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 57/60
	I1031 00:06:42.875752  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 58/60
	I1031 00:06:43.877324  246832 main.go:141] libmachine: (old-k8s-version-225140) Waiting for machine to stop 59/60
	I1031 00:06:44.878089  246832 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1031 00:06:44.878144  246832 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1031 00:06:44.880431  246832 out.go:177] 
	W1031 00:06:44.882062  246832 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1031 00:06:44.882085  246832 out.go:239] * 
	* 
	W1031 00:06:44.888094  246832 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1031 00:06:44.889831  246832 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-225140 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-225140 -n old-k8s-version-225140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-225140 -n old-k8s-version-225140: exit status 3 (18.477512545s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1031 00:07:03.369283  247795 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.65:22: connect: no route to host
	E1031 00:07:03.369305  247795 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.65:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-225140" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (140.19s)
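The stop failure above follows a fixed pattern: issue the driver Stop call, poll the machine state once per second for 60 polls, retry the whole sequence once after a short backoff, and finally exit 82 with GUEST_STOP_TIMEOUT when the VM still reports "Running". A minimal Go sketch of that wait-and-retry shape is below; it is not minikube's implementation, and stopVM/vmState are hypothetical stand-ins for the libmachine driver calls:

	// Minimal sketch of the stop/wait/retry pattern visible in the log above.
	// Not minikube's implementation: stopVM and vmState are hypothetical stand-ins
	// for the libmachine driver calls, and the 60x1s budget plus a single retry
	// simply mirror what the log shows before the GUEST_STOP_TIMEOUT exit (82).
	package main

	import (
		"fmt"
		"time"
	)

	func stopVM() error   { return nil }       // hypothetical: ask the driver to stop the VM
	func vmState() string { return "Running" } // hypothetical: query the driver for the VM state

	func stopWithTimeout(polls int, interval time.Duration) error {
		if err := stopVM(); err != nil {
			return err
		}
		for i := 0; i < polls; i++ {
			if vmState() != "Running" {
				return nil // machine reached a stopped state
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, polls)
			time.Sleep(interval)
		}
		return fmt.Errorf("unable to stop vm, current state %q", vmState())
	}

	func main() {
		err := stopWithTimeout(60, time.Second)
		if err != nil {
			time.Sleep(time.Second) // one short backoff, then retry once, as in the log
			err = stopWithTimeout(60, time.Second)
		}
		if err != nil {
			fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err) // exit status 82 path
		}
	}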

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (140.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-640155 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-640155 --alsologtostderr -v=3: exit status 82 (2m1.4816892s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-640155"  ...
	* Stopping node "no-preload-640155"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1031 00:05:12.520761  247030 out.go:296] Setting OutFile to fd 1 ...
	I1031 00:05:12.520961  247030 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:05:12.520975  247030 out.go:309] Setting ErrFile to fd 2...
	I1031 00:05:12.520983  247030 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:05:12.521442  247030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1031 00:05:12.522002  247030 out.go:303] Setting JSON to false
	I1031 00:05:12.522238  247030 mustload.go:65] Loading cluster: no-preload-640155
	I1031 00:05:12.522661  247030 config.go:182] Loaded profile config "no-preload-640155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:05:12.522737  247030 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/config.json ...
	I1031 00:05:12.522903  247030 mustload.go:65] Loading cluster: no-preload-640155
	I1031 00:05:12.523003  247030 config.go:182] Loaded profile config "no-preload-640155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:05:12.523031  247030 stop.go:39] StopHost: no-preload-640155
	I1031 00:05:12.523399  247030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:05:12.523458  247030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:05:12.539202  247030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40283
	I1031 00:05:12.539706  247030 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:05:12.540359  247030 main.go:141] libmachine: Using API Version  1
	I1031 00:05:12.540386  247030 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:05:12.540845  247030 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:05:12.542831  247030 out.go:177] * Stopping node "no-preload-640155"  ...
	I1031 00:05:12.544962  247030 main.go:141] libmachine: Stopping "no-preload-640155"...
	I1031 00:05:12.544992  247030 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:05:12.546794  247030 main.go:141] libmachine: (no-preload-640155) Calling .Stop
	I1031 00:05:12.551395  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 0/60
	I1031 00:05:13.554180  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 1/60
	I1031 00:05:14.555550  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 2/60
	I1031 00:05:15.557527  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 3/60
	I1031 00:05:16.559281  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 4/60
	I1031 00:05:17.561072  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 5/60
	I1031 00:05:18.563841  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 6/60
	I1031 00:05:19.565120  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 7/60
	I1031 00:05:20.567539  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 8/60
	I1031 00:05:21.569287  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 9/60
	I1031 00:05:22.571722  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 10/60
	I1031 00:05:23.573488  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 11/60
	I1031 00:05:24.574988  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 12/60
	I1031 00:05:25.576557  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 13/60
	I1031 00:05:26.578016  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 14/60
	I1031 00:05:27.580126  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 15/60
	I1031 00:05:28.581642  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 16/60
	I1031 00:05:29.583505  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 17/60
	I1031 00:05:30.585869  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 18/60
	I1031 00:05:31.587994  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 19/60
	I1031 00:05:32.589618  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 20/60
	I1031 00:05:33.591764  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 21/60
	I1031 00:05:34.593222  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 22/60
	I1031 00:05:35.594599  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 23/60
	I1031 00:05:36.596266  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 24/60
	I1031 00:05:37.597995  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 25/60
	I1031 00:05:38.599634  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 26/60
	I1031 00:05:39.601113  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 27/60
	I1031 00:05:40.602647  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 28/60
	I1031 00:05:41.604293  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 29/60
	I1031 00:05:42.605761  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 30/60
	I1031 00:05:43.607331  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 31/60
	I1031 00:05:44.609623  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 32/60
	I1031 00:05:45.611701  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 33/60
	I1031 00:05:46.613345  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 34/60
	I1031 00:05:47.614872  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 35/60
	I1031 00:05:48.616384  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 36/60
	I1031 00:05:49.618015  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 37/60
	I1031 00:05:50.619473  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 38/60
	I1031 00:05:51.620872  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 39/60
	I1031 00:05:52.622664  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 40/60
	I1031 00:05:53.624098  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 41/60
	I1031 00:05:54.625659  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 42/60
	I1031 00:05:55.627405  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 43/60
	I1031 00:05:56.629026  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 44/60
	I1031 00:05:57.631104  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 45/60
	I1031 00:05:58.632590  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 46/60
	I1031 00:05:59.634198  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 47/60
	I1031 00:06:00.635511  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 48/60
	I1031 00:06:01.636876  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 49/60
	I1031 00:06:02.638483  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 50/60
	I1031 00:06:03.639631  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 51/60
	I1031 00:06:04.642006  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 52/60
	I1031 00:06:05.643687  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 53/60
	I1031 00:06:06.645235  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 54/60
	I1031 00:06:07.647449  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 55/60
	I1031 00:06:08.648638  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 56/60
	I1031 00:06:09.650130  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 57/60
	I1031 00:06:10.651302  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 58/60
	I1031 00:06:11.652921  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 59/60
	I1031 00:06:12.654036  247030 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1031 00:06:12.654156  247030 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1031 00:06:12.654187  247030 retry.go:31] will retry after 1.125509584s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1031 00:06:13.780422  247030 stop.go:39] StopHost: no-preload-640155
	I1031 00:06:13.780922  247030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:06:13.780995  247030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:06:13.802080  247030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35523
	I1031 00:06:13.802570  247030 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:06:13.803190  247030 main.go:141] libmachine: Using API Version  1
	I1031 00:06:13.803213  247030 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:06:13.803843  247030 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:06:13.805633  247030 out.go:177] * Stopping node "no-preload-640155"  ...
	I1031 00:06:13.807640  247030 main.go:141] libmachine: Stopping "no-preload-640155"...
	I1031 00:06:13.807669  247030 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:06:13.809489  247030 main.go:141] libmachine: (no-preload-640155) Calling .Stop
	I1031 00:06:13.814480  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 0/60
	I1031 00:06:14.815810  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 1/60
	I1031 00:06:15.817649  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 2/60
	I1031 00:06:16.819660  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 3/60
	I1031 00:06:17.821145  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 4/60
	I1031 00:06:18.823053  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 5/60
	I1031 00:06:19.824705  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 6/60
	I1031 00:06:20.825879  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 7/60
	I1031 00:06:21.827392  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 8/60
	I1031 00:06:22.828990  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 9/60
	I1031 00:06:23.831199  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 10/60
	I1031 00:06:24.833091  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 11/60
	I1031 00:06:25.834518  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 12/60
	I1031 00:06:26.836454  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 13/60
	I1031 00:06:27.837965  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 14/60
	I1031 00:06:28.839832  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 15/60
	I1031 00:06:29.841455  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 16/60
	I1031 00:06:30.843554  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 17/60
	I1031 00:06:31.845017  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 18/60
	I1031 00:06:32.846367  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 19/60
	I1031 00:06:33.848354  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 20/60
	I1031 00:06:34.849878  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 21/60
	I1031 00:06:35.851641  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 22/60
	I1031 00:06:36.853711  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 23/60
	I1031 00:06:37.855095  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 24/60
	I1031 00:06:38.856807  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 25/60
	I1031 00:06:39.858383  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 26/60
	I1031 00:06:40.860118  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 27/60
	I1031 00:06:41.861477  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 28/60
	I1031 00:06:42.863093  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 29/60
	I1031 00:06:43.865336  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 30/60
	I1031 00:06:44.867671  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 31/60
	I1031 00:06:45.869278  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 32/60
	I1031 00:06:46.870581  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 33/60
	I1031 00:06:47.871741  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 34/60
	I1031 00:06:48.873814  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 35/60
	I1031 00:06:49.875165  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 36/60
	I1031 00:06:50.876555  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 37/60
	I1031 00:06:51.878259  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 38/60
	I1031 00:06:52.880021  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 39/60
	I1031 00:06:53.882093  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 40/60
	I1031 00:06:54.883567  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 41/60
	I1031 00:06:55.885264  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 42/60
	I1031 00:06:56.887514  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 43/60
	I1031 00:06:57.888886  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 44/60
	I1031 00:06:58.890747  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 45/60
	I1031 00:06:59.892254  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 46/60
	I1031 00:07:00.893702  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 47/60
	I1031 00:07:01.895075  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 48/60
	I1031 00:07:02.896410  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 49/60
	I1031 00:07:03.898771  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 50/60
	I1031 00:07:04.900134  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 51/60
	I1031 00:07:05.901297  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 52/60
	I1031 00:07:06.903368  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 53/60
	I1031 00:07:07.904736  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 54/60
	I1031 00:07:08.906834  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 55/60
	I1031 00:07:09.908231  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 56/60
	I1031 00:07:10.910213  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 57/60
	I1031 00:07:11.911497  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 58/60
	I1031 00:07:12.913001  247030 main.go:141] libmachine: (no-preload-640155) Waiting for machine to stop 59/60
	I1031 00:07:13.913989  247030 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1031 00:07:13.914042  247030 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1031 00:07:13.916234  247030 out.go:177] 
	W1031 00:07:13.917960  247030 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1031 00:07:13.917980  247030 out.go:239] * 
	* 
	W1031 00:07:13.922163  247030 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1031 00:07:13.923518  247030 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-640155 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-640155 -n no-preload-640155
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-640155 -n no-preload-640155: exit status 3 (18.626737427s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1031 00:07:32.553341  248010 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.168:22: connect: no route to host
	E1031 00:07:32.553362  248010 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.168:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-640155" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (140.11s)
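The log above shows the pattern behind this failure: `minikube stop` asks the kvm2 driver to stop the VM, polls the machine state once per second for up to 60 attempts ("Waiting for machine to stop N/60"), retries the whole stop once, and then exits with GUEST_STOP_TIMEOUT (exit status 82) because the guest never leaves the "Running" state. The sketch below is only an illustration of that poll-and-retry shape, not minikube's actual stop code; the names stopOnce, requestStop, and state are hypothetical, and maxWait is shortened from 60 to 3 to keep the demo quick.

package main

import (
	"errors"
	"fmt"
	"time"
)

var errStillRunning = errors.New(`unable to stop vm, current state "Running"`)

// stopOnce issues a stop request and waits up to maxWait one-second polls
// for the VM to leave the "Running" state, mirroring the
// "Waiting for machine to stop N/60" lines in the log.
func stopOnce(requestStop func() error, state func() string, maxWait int) error {
	if err := requestStop(); err != nil {
		return err
	}
	for i := 0; i < maxWait; i++ {
		if state() != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxWait)
		time.Sleep(time.Second)
	}
	return errStillRunning
}

func main() {
	requestStop := func() error { return nil }  // simulated driver call that "succeeds"
	state := func() string { return "Running" } // simulated VM that never stops

	for attempt := 0; attempt < 2; attempt++ { // one retry, as in the log
		if err := stopOnce(requestStop, state, 3); err == nil {
			return
		}
	}
	fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT")
}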

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.75s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-078843 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-078843 --alsologtostderr -v=3: exit status 82 (2m1.224155695s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-078843"  ...
	* Stopping node "embed-certs-078843"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1031 00:06:05.360746  247319 out.go:296] Setting OutFile to fd 1 ...
	I1031 00:06:05.360924  247319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:06:05.360953  247319 out.go:309] Setting ErrFile to fd 2...
	I1031 00:06:05.360962  247319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:06:05.361236  247319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1031 00:06:05.361614  247319 out.go:303] Setting JSON to false
	I1031 00:06:05.361755  247319 mustload.go:65] Loading cluster: embed-certs-078843
	I1031 00:06:05.362216  247319 config.go:182] Loaded profile config "embed-certs-078843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:06:05.362317  247319 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/config.json ...
	I1031 00:06:05.362563  247319 mustload.go:65] Loading cluster: embed-certs-078843
	I1031 00:06:05.362747  247319 config.go:182] Loaded profile config "embed-certs-078843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:06:05.362821  247319 stop.go:39] StopHost: embed-certs-078843
	I1031 00:06:05.363426  247319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:06:05.363502  247319 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:06:05.379978  247319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35179
	I1031 00:06:05.380493  247319 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:06:05.381237  247319 main.go:141] libmachine: Using API Version  1
	I1031 00:06:05.381265  247319 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:06:05.381681  247319 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:06:05.383595  247319 out.go:177] * Stopping node "embed-certs-078843"  ...
	I1031 00:06:05.385336  247319 main.go:141] libmachine: Stopping "embed-certs-078843"...
	I1031 00:06:05.385354  247319 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:06:05.387385  247319 main.go:141] libmachine: (embed-certs-078843) Calling .Stop
	I1031 00:06:05.391330  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 0/60
	I1031 00:06:06.392809  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 1/60
	I1031 00:06:07.394026  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 2/60
	I1031 00:06:08.395982  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 3/60
	I1031 00:06:09.397891  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 4/60
	I1031 00:06:10.399974  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 5/60
	I1031 00:06:11.401526  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 6/60
	I1031 00:06:12.403621  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 7/60
	I1031 00:06:13.405073  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 8/60
	I1031 00:06:14.406669  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 9/60
	I1031 00:06:15.409007  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 10/60
	I1031 00:06:16.410547  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 11/60
	I1031 00:06:17.412126  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 12/60
	I1031 00:06:18.413498  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 13/60
	I1031 00:06:19.415567  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 14/60
	I1031 00:06:20.417713  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 15/60
	I1031 00:06:21.419318  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 16/60
	I1031 00:06:22.420593  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 17/60
	I1031 00:06:23.422091  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 18/60
	I1031 00:06:24.423501  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 19/60
	I1031 00:06:25.425880  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 20/60
	I1031 00:06:26.428087  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 21/60
	I1031 00:06:27.429569  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 22/60
	I1031 00:06:28.431500  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 23/60
	I1031 00:06:29.433000  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 24/60
	I1031 00:06:30.435369  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 25/60
	I1031 00:06:31.436880  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 26/60
	I1031 00:06:32.438538  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 27/60
	I1031 00:06:33.439802  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 28/60
	I1031 00:06:34.441183  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 29/60
	I1031 00:06:35.443220  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 30/60
	I1031 00:06:36.444696  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 31/60
	I1031 00:06:37.446218  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 32/60
	I1031 00:06:38.447598  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 33/60
	I1031 00:06:39.449189  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 34/60
	I1031 00:06:40.450816  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 35/60
	I1031 00:06:41.452354  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 36/60
	I1031 00:06:42.453959  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 37/60
	I1031 00:06:43.455737  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 38/60
	I1031 00:06:44.457538  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 39/60
	I1031 00:06:45.459952  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 40/60
	I1031 00:06:46.461193  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 41/60
	I1031 00:06:47.462463  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 42/60
	I1031 00:06:48.464085  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 43/60
	I1031 00:06:49.465954  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 44/60
	I1031 00:06:50.467824  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 45/60
	I1031 00:06:51.469152  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 46/60
	I1031 00:06:52.471618  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 47/60
	I1031 00:06:53.473467  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 48/60
	I1031 00:06:54.475079  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 49/60
	I1031 00:06:55.477524  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 50/60
	I1031 00:06:56.479119  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 51/60
	I1031 00:06:57.480533  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 52/60
	I1031 00:06:58.482058  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 53/60
	I1031 00:06:59.483301  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 54/60
	I1031 00:07:00.484917  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 55/60
	I1031 00:07:01.486698  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 56/60
	I1031 00:07:02.488158  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 57/60
	I1031 00:07:03.490497  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 58/60
	I1031 00:07:04.491683  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 59/60
	I1031 00:07:05.493103  247319 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1031 00:07:05.493200  247319 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1031 00:07:05.493231  247319 retry.go:31] will retry after 886.413673ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1031 00:07:06.380254  247319 stop.go:39] StopHost: embed-certs-078843
	I1031 00:07:06.380623  247319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:07:06.380666  247319 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:07:06.396717  247319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39693
	I1031 00:07:06.397243  247319 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:07:06.397839  247319 main.go:141] libmachine: Using API Version  1
	I1031 00:07:06.397873  247319 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:07:06.398259  247319 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:07:06.400337  247319 out.go:177] * Stopping node "embed-certs-078843"  ...
	I1031 00:07:06.401637  247319 main.go:141] libmachine: Stopping "embed-certs-078843"...
	I1031 00:07:06.401653  247319 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:07:06.403262  247319 main.go:141] libmachine: (embed-certs-078843) Calling .Stop
	I1031 00:07:06.407094  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 0/60
	I1031 00:07:07.408683  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 1/60
	I1031 00:07:08.410886  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 2/60
	I1031 00:07:09.412298  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 3/60
	I1031 00:07:10.413792  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 4/60
	I1031 00:07:11.415961  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 5/60
	I1031 00:07:12.417245  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 6/60
	I1031 00:07:13.419073  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 7/60
	I1031 00:07:14.420432  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 8/60
	I1031 00:07:15.422203  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 9/60
	I1031 00:07:16.423790  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 10/60
	I1031 00:07:17.425330  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 11/60
	I1031 00:07:18.426682  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 12/60
	I1031 00:07:19.428827  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 13/60
	I1031 00:07:20.430181  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 14/60
	I1031 00:07:21.432057  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 15/60
	I1031 00:07:22.433663  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 16/60
	I1031 00:07:23.435485  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 17/60
	I1031 00:07:24.437719  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 18/60
	I1031 00:07:25.439248  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 19/60
	I1031 00:07:26.441132  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 20/60
	I1031 00:07:27.442473  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 21/60
	I1031 00:07:28.444172  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 22/60
	I1031 00:07:29.445672  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 23/60
	I1031 00:07:30.447621  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 24/60
	I1031 00:07:31.449537  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 25/60
	I1031 00:07:32.450897  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 26/60
	I1031 00:07:33.452334  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 27/60
	I1031 00:07:34.453781  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 28/60
	I1031 00:07:35.455155  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 29/60
	I1031 00:07:36.456926  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 30/60
	I1031 00:07:37.458383  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 31/60
	I1031 00:07:38.459809  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 32/60
	I1031 00:07:39.461387  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 33/60
	I1031 00:07:40.462738  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 34/60
	I1031 00:07:41.464678  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 35/60
	I1031 00:07:42.466152  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 36/60
	I1031 00:07:43.467528  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 37/60
	I1031 00:07:44.469038  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 38/60
	I1031 00:07:45.470263  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 39/60
	I1031 00:07:46.472177  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 40/60
	I1031 00:07:47.473610  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 41/60
	I1031 00:07:48.475351  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 42/60
	I1031 00:07:49.476925  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 43/60
	I1031 00:07:50.478240  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 44/60
	I1031 00:07:51.479933  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 45/60
	I1031 00:07:52.481261  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 46/60
	I1031 00:07:53.482798  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 47/60
	I1031 00:07:54.484167  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 48/60
	I1031 00:07:55.485521  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 49/60
	I1031 00:07:56.487281  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 50/60
	I1031 00:07:57.488783  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 51/60
	I1031 00:07:58.490264  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 52/60
	I1031 00:07:59.491926  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 53/60
	I1031 00:08:00.493336  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 54/60
	I1031 00:08:01.495078  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 55/60
	I1031 00:08:02.496546  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 56/60
	I1031 00:08:03.497944  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 57/60
	I1031 00:08:04.499452  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 58/60
	I1031 00:08:05.500797  247319 main.go:141] libmachine: (embed-certs-078843) Waiting for machine to stop 59/60
	I1031 00:08:06.501680  247319 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1031 00:08:06.501745  247319 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1031 00:08:06.503574  247319 out.go:177] 
	W1031 00:08:06.504948  247319 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1031 00:08:06.504975  247319 out.go:239] * 
	* 
	W1031 00:08:06.509002  247319 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1031 00:08:06.510435  247319 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-078843 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-078843 -n embed-certs-078843
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-078843 -n embed-certs-078843: exit status 3 (18.52093141s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1031 00:08:25.033276  248510 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.2:22: connect: no route to host
	E1031 00:08:25.033300  248510 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.2:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-078843" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.75s)
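The post-mortem status checks above report "Error" rather than "Stopped" because the status probe needs an SSH session into the guest, and TCP port 22 is unreachable ("dial tcp 192.168.50.2:22: connect: no route to host"). The snippet below is a small illustrative probe of that failure mode, assuming nothing beyond the standard library; it is not minikube's status implementation, and the address is taken from the log output above.

package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable attempts a plain TCP connection to the guest's SSH port.
// When the route to the guest is gone, DialTimeout returns the same
// "connect: no route to host" error seen in the status output.
func sshReachable(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := sshReachable("192.168.50.2:22", 3*time.Second); err != nil {
		fmt.Println("status error:", err)
	}
}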

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-225140 -n old-k8s-version-225140
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-225140 -n old-k8s-version-225140: exit status 3 (3.199548786s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1031 00:07:06.569222  247868 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.65:22: connect: no route to host
	E1031 00:07:06.569239  247868 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.65:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-225140 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1031 00:07:08.185507  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-225140 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155490984s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.65:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-225140 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-225140 -n old-k8s-version-225140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-225140 -n old-k8s-version-225140: exit status 3 (3.059949905s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1031 00:07:15.785302  247980 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.65:22: connect: no route to host
	E1031 00:07:15.785323  247980 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.65:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-225140" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)
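This test first verifies that the profile's host reports "Stopped" by running `out/minikube-linux-amd64 status --format={{.Host}}`, and only then re-enables the dashboard addon; here the status call returned "Error", so both the status assertion and the subsequent `addons enable` fail. The sketch below is a rough, assumption-laden rendering of that check (the helper name hostStatus and the hard-coded binary path and profile are illustrative, not the test's exact code), using only the command and flags visible in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus runs "minikube status --format={{.Host}} -p <profile>" and
// returns the trimmed stdout. minikube status exits non-zero when the host
// is not Running, so the captured output is kept even when err is non-nil,
// which is how the test helpers treat "exit status 3 (may be ok)".
func hostStatus(minikubePath, profile string) (string, error) {
	out, err := exec.Command(minikubePath, "status",
		"--format={{.Host}}", "-p", profile).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	status, err := hostStatus("out/minikube-linux-amd64", "old-k8s-version-225140")
	if status != "Stopped" {
		fmt.Printf("expected post-stop host status %q but got %q (err: %v)\n",
			"Stopped", status, err)
	}
}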

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-892233 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-892233 --alsologtostderr -v=3: exit status 82 (2m0.883380975s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-892233"  ...
	* Stopping node "default-k8s-diff-port-892233"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1031 00:07:27.226461  248242 out.go:296] Setting OutFile to fd 1 ...
	I1031 00:07:27.226732  248242 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:07:27.226744  248242 out.go:309] Setting ErrFile to fd 2...
	I1031 00:07:27.226749  248242 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:07:27.226968  248242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1031 00:07:27.227348  248242 out.go:303] Setting JSON to false
	I1031 00:07:27.227459  248242 mustload.go:65] Loading cluster: default-k8s-diff-port-892233
	I1031 00:07:27.227867  248242 config.go:182] Loaded profile config "default-k8s-diff-port-892233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:07:27.227965  248242 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/config.json ...
	I1031 00:07:27.228166  248242 mustload.go:65] Loading cluster: default-k8s-diff-port-892233
	I1031 00:07:27.228305  248242 config.go:182] Loaded profile config "default-k8s-diff-port-892233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:07:27.228361  248242 stop.go:39] StopHost: default-k8s-diff-port-892233
	I1031 00:07:27.228772  248242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:07:27.228831  248242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:07:27.243134  248242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35835
	I1031 00:07:27.243621  248242 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:07:27.244195  248242 main.go:141] libmachine: Using API Version  1
	I1031 00:07:27.244230  248242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:07:27.244617  248242 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:07:27.247016  248242 out.go:177] * Stopping node "default-k8s-diff-port-892233"  ...
	I1031 00:07:27.248746  248242 main.go:141] libmachine: Stopping "default-k8s-diff-port-892233"...
	I1031 00:07:27.248766  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:07:27.250521  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Stop
	I1031 00:07:27.253730  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 0/60
	I1031 00:07:28.255652  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 1/60
	I1031 00:07:29.256963  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 2/60
	I1031 00:07:30.258312  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 3/60
	I1031 00:07:31.259815  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 4/60
	I1031 00:07:32.261954  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 5/60
	I1031 00:07:33.263481  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 6/60
	I1031 00:07:34.265033  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 7/60
	I1031 00:07:35.266676  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 8/60
	I1031 00:07:36.268247  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 9/60
	I1031 00:07:37.269628  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 10/60
	I1031 00:07:38.271151  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 11/60
	I1031 00:07:39.272595  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 12/60
	I1031 00:07:40.274031  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 13/60
	I1031 00:07:41.275504  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 14/60
	I1031 00:07:42.277186  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 15/60
	I1031 00:07:43.278561  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 16/60
	I1031 00:07:44.280023  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 17/60
	I1031 00:07:45.281714  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 18/60
	I1031 00:07:46.283336  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 19/60
	I1031 00:07:47.285552  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 20/60
	I1031 00:07:48.287562  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 21/60
	I1031 00:07:49.289097  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 22/60
	I1031 00:07:50.290548  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 23/60
	I1031 00:07:51.291941  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 24/60
	I1031 00:07:52.293982  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 25/60
	I1031 00:07:53.295235  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 26/60
	I1031 00:07:54.296507  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 27/60
	I1031 00:07:55.297827  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 28/60
	I1031 00:07:56.299222  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 29/60
	I1031 00:07:57.301366  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 30/60
	I1031 00:07:58.302694  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 31/60
	I1031 00:07:59.304096  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 32/60
	I1031 00:08:00.305688  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 33/60
	I1031 00:08:01.307170  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 34/60
	I1031 00:08:02.309121  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 35/60
	I1031 00:08:03.310612  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 36/60
	I1031 00:08:04.311872  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 37/60
	I1031 00:08:05.313178  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 38/60
	I1031 00:08:06.314472  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 39/60
	I1031 00:08:07.316695  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 40/60
	I1031 00:08:08.318111  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 41/60
	I1031 00:08:09.319480  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 42/60
	I1031 00:08:10.320782  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 43/60
	I1031 00:08:11.322875  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 44/60
	I1031 00:08:12.324883  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 45/60
	I1031 00:08:13.326197  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 46/60
	I1031 00:08:14.327888  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 47/60
	I1031 00:08:15.329460  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 48/60
	I1031 00:08:16.331164  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 49/60
	I1031 00:08:17.333415  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 50/60
	I1031 00:08:18.334783  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 51/60
	I1031 00:08:19.336058  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 52/60
	I1031 00:08:20.337627  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 53/60
	I1031 00:08:21.339121  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 54/60
	I1031 00:08:22.341233  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 55/60
	I1031 00:08:23.342605  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 56/60
	I1031 00:08:24.344033  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 57/60
	I1031 00:08:25.345459  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 58/60
	I1031 00:08:26.347581  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 59/60
	I1031 00:08:27.349109  248242 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1031 00:08:27.349190  248242 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1031 00:08:27.349254  248242 retry.go:31] will retry after 569.747407ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1031 00:08:27.920096  248242 stop.go:39] StopHost: default-k8s-diff-port-892233
	I1031 00:08:27.920666  248242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:08:27.920735  248242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:08:27.935519  248242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40941
	I1031 00:08:27.935931  248242 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:08:27.936463  248242 main.go:141] libmachine: Using API Version  1
	I1031 00:08:27.936491  248242 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:08:27.936891  248242 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:08:27.938946  248242 out.go:177] * Stopping node "default-k8s-diff-port-892233"  ...
	I1031 00:08:27.940330  248242 main.go:141] libmachine: Stopping "default-k8s-diff-port-892233"...
	I1031 00:08:27.940347  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:08:27.942276  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Stop
	I1031 00:08:27.946207  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 0/60
	I1031 00:08:28.947657  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 1/60
	I1031 00:08:29.949370  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 2/60
	I1031 00:08:30.950655  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 3/60
	I1031 00:08:31.952003  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 4/60
	I1031 00:08:32.953861  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 5/60
	I1031 00:08:33.955593  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 6/60
	I1031 00:08:34.957003  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 7/60
	I1031 00:08:35.958455  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 8/60
	I1031 00:08:36.959897  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 9/60
	I1031 00:08:37.961818  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 10/60
	I1031 00:08:38.963320  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 11/60
	I1031 00:08:39.964868  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 12/60
	I1031 00:08:40.966426  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 13/60
	I1031 00:08:41.967805  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 14/60
	I1031 00:08:42.969631  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 15/60
	I1031 00:08:43.971390  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 16/60
	I1031 00:08:44.972934  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 17/60
	I1031 00:08:45.974249  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 18/60
	I1031 00:08:46.975779  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 19/60
	I1031 00:08:47.977885  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 20/60
	I1031 00:08:48.979511  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 21/60
	I1031 00:08:49.981230  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 22/60
	I1031 00:08:50.982823  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 23/60
	I1031 00:08:51.984306  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 24/60
	I1031 00:08:52.986035  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 25/60
	I1031 00:08:53.987429  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 26/60
	I1031 00:08:54.988969  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 27/60
	I1031 00:08:55.990279  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 28/60
	I1031 00:08:56.991958  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 29/60
	I1031 00:08:57.993917  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 30/60
	I1031 00:08:58.995486  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 31/60
	I1031 00:08:59.997031  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 32/60
	I1031 00:09:00.998426  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 33/60
	I1031 00:09:01.999928  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 34/60
	I1031 00:09:03.001544  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 35/60
	I1031 00:09:04.003264  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 36/60
	I1031 00:09:05.004731  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 37/60
	I1031 00:09:06.006121  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 38/60
	I1031 00:09:07.007571  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 39/60
	I1031 00:09:08.009613  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 40/60
	I1031 00:09:09.011369  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 41/60
	I1031 00:09:10.012803  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 42/60
	I1031 00:09:11.014119  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 43/60
	I1031 00:09:12.015604  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 44/60
	I1031 00:09:13.017902  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 45/60
	I1031 00:09:14.019550  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 46/60
	I1031 00:09:15.021015  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 47/60
	I1031 00:09:16.022361  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 48/60
	I1031 00:09:17.024180  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 49/60
	I1031 00:09:18.026301  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 50/60
	I1031 00:09:19.027990  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 51/60
	I1031 00:09:20.029498  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 52/60
	I1031 00:09:21.030932  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 53/60
	I1031 00:09:22.032349  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 54/60
	I1031 00:09:23.033745  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 55/60
	I1031 00:09:24.035140  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 56/60
	I1031 00:09:25.036466  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 57/60
	I1031 00:09:26.037838  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 58/60
	I1031 00:09:27.039427  248242 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for machine to stop 59/60
	I1031 00:09:28.040506  248242 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1031 00:09:28.040557  248242 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1031 00:09:28.042256  248242 out.go:177] 
	W1031 00:09:28.043573  248242 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1031 00:09:28.043589  248242 out.go:239] * 
	* 
	W1031 00:09:28.047759  248242 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1031 00:09:28.049106  248242 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-892233 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-892233 -n default-k8s-diff-port-892233
E1031 00:09:30.631185  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-892233 -n default-k8s-diff-port-892233: exit status 3 (18.646783274s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1031 00:09:46.697345  248882 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.2:22: connect: no route to host
	E1031 00:09:46.697368  248882 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.2:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892233" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.53s)
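Note: the GUEST_STOP_TIMEOUT above means the kvm2 guest never left the "Running" state before minikube gave up waiting. A minimal sketch for retrying the stop by hand, assuming the libvirt domain is named after the profile (driver logs elsewhere in this report show domains named this way):

	out/minikube-linux-amd64 stop -p default-k8s-diff-port-892233 --alsologtostderr -v=3
	sudo virsh list --all                                   # is the domain still reported as "running"?
	sudo virsh destroy default-k8s-diff-port-892233         # hard power-off if the graceful stop hangs

The follow-up status checks failing with "dial tcp 192.168.39.2:22: connect: no route to host" are consistent with a guest that is neither cleanly stopped nor reachable over SSH.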

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-640155 -n no-preload-640155
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-640155 -n no-preload-640155: exit status 3 (3.199762623s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1031 00:07:35.753365  248271 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.168:22: connect: no route to host
	E1031 00:07:35.753395  248271 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.168:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-640155 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-640155 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155792914s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.168:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-640155 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-640155 -n no-preload-640155
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-640155 -n no-preload-640155: exit status 3 (3.060928282s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1031 00:07:44.969328  248350 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.168:22: connect: no route to host
	E1031 00:07:44.969356  248350 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.168:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-640155" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)
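For reference, the sequence this test drives can be replayed by hand; a sketch using the profile name from the log above:

	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-640155
	# the test expects "Stopped" here; this run returned "Error" instead
	out/minikube-linux-amd64 addons enable dashboard -p no-preload-640155 \
	    --images=MetricsScraper=registry.k8s.io/echoserver:1.4

The MK_ADDON_ENABLE_PAUSED exit occurs because `addons enable` first checks for paused containers over SSH (the "check paused: list paused: crictl list" chain above), and that SSH dial fails with the same "no route to host" error as the status check.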

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-078843 -n embed-certs-078843
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-078843 -n embed-certs-078843: exit status 3 (3.199637005s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1031 00:08:28.233349  248600 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.2:22: connect: no route to host
	E1031 00:08:28.233370  248600 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.2:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-078843 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-078843 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154765948s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.2:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-078843 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-078843 -n embed-certs-078843
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-078843 -n embed-certs-078843: exit status 3 (3.061136688s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1031 00:08:37.449406  248676 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.2:22: connect: no route to host
	E1031 00:08:37.449429  248676 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.2:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-078843" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)
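The failure mode is identical to the no-preload case above: every step reduces to "dial tcp 192.168.50.2:22: connect: no route to host". A quick connectivity sketch against the guest IP from the log (nc is assumed to be present on the CI host):

	ping -c 3 192.168.50.2                          # does the guest answer at all?
	nc -vz -w 5 192.168.50.2 22                     # is sshd reachable?
	out/minikube-linux-amd64 ssh -p embed-certs-078843 -- sudo crictl ps

If the guest is down, all three fail, which matches the "Error" host status reported above.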

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-892233 -n default-k8s-diff-port-892233
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-892233 -n default-k8s-diff-port-892233: exit status 3 (3.199542323s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1031 00:09:49.897334  248945 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.2:22: connect: no route to host
	E1031 00:09:49.897360  248945 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.2:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-892233 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-892233 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155110017s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.2:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-892233 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-892233 -n default-k8s-diff-port-892233
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-892233 -n default-k8s-diff-port-892233: exit status 3 (3.060810695s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1031 00:09:59.113404  249014 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.2:22: connect: no route to host
	E1031 00:09:59.113433  249014 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.2:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-892233" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-078843 -n embed-certs-078843
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-10-31 00:26:26.066664029 +0000 UTC m=+5088.316680939
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
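The wait itself can be reproduced with kubectl against the restarted cluster; a sketch, assuming the kubectl context name matches the profile name (minikube's default):

	kubectl --context embed-certs-078843 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context embed-certs-078843 -n kubernetes-dashboard wait \
	    --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m

An empty pod list here would indicate the dashboard addon was never redeployed after the restart, rather than a pod that started but failed its readiness check.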
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-078843 -n embed-certs-078843
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-078843 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-078843 logs -n 25: (1.759554107s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p cert-options-344463                                 | cert-options-344463          | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:02 UTC | 31 Oct 23 00:02 UTC |
	| start   | -p no-preload-640155                                   | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:02 UTC | 31 Oct 23 00:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| start   | -p stopped-upgrade-237143                              | stopped-upgrade-237143       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p stopped-upgrade-237143                              | stopped-upgrade-237143       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC | 31 Oct 23 00:04 UTC |
	| start   | -p embed-certs-078843                                  | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC | 31 Oct 23 00:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-225140        | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC | 31 Oct 23 00:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-225140                              | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-640155             | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:05 UTC | 31 Oct 23 00:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-640155                                   | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p cert-expiration-663908                              | cert-expiration-663908       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:05 UTC | 31 Oct 23 00:06 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-078843            | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-078843                                  | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-663908                              | cert-expiration-663908       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-221554 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:06 UTC |
	|         | disable-driver-mounts-221554                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:07 UTC |
	|         | default-k8s-diff-port-892233                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-225140             | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-225140                              | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC | 31 Oct 23 00:20 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-892233  | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC | 31 Oct 23 00:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC |                     |
	|         | default-k8s-diff-port-892233                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-640155                  | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-640155                                   | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC | 31 Oct 23 00:22 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-078843                 | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-078843                                  | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:08 UTC | 31 Oct 23 00:17 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-892233       | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:09 UTC | 31 Oct 23 00:18 UTC |
	|         | default-k8s-diff-port-892233                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 00:09:59
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 00:09:59.171110  249055 out.go:296] Setting OutFile to fd 1 ...
	I1031 00:09:59.171372  249055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:09:59.171383  249055 out.go:309] Setting ErrFile to fd 2...
	I1031 00:09:59.171387  249055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:09:59.171591  249055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1031 00:09:59.172151  249055 out.go:303] Setting JSON to false
	I1031 00:09:59.173091  249055 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28351,"bootTime":1698682648,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 00:09:59.173154  249055 start.go:138] virtualization: kvm guest
	I1031 00:09:59.175712  249055 out.go:177] * [default-k8s-diff-port-892233] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 00:09:59.177218  249055 notify.go:220] Checking for updates...
	I1031 00:09:59.177238  249055 out.go:177]   - MINIKUBE_LOCATION=17527
	I1031 00:09:59.178590  249055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 00:09:59.179936  249055 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:09:59.181243  249055 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1031 00:09:59.182619  249055 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 00:09:59.184021  249055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 00:09:59.185755  249055 config.go:182] Loaded profile config "default-k8s-diff-port-892233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:09:59.186187  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:09:59.186242  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:09:59.200537  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I1031 00:09:59.201002  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:09:59.201576  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:09:59.201596  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:09:59.201949  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:09:59.202159  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:09:59.202362  249055 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 00:09:59.202635  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:09:59.202680  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:09:59.216197  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35869
	I1031 00:09:59.216575  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:09:59.216998  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:09:59.217027  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:09:59.217349  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:09:59.217537  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:09:59.250565  249055 out.go:177] * Using the kvm2 driver based on existing profile
	I1031 00:09:59.251974  249055 start.go:298] selected driver: kvm2
	I1031 00:09:59.251988  249055 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-892233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-892233 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.2 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:09:59.252123  249055 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 00:09:59.253132  249055 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:09:59.253220  249055 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17527-208817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 00:09:59.266948  249055 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 00:09:59.267297  249055 start_flags.go:934] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 00:09:59.267362  249055 cni.go:84] Creating CNI manager for ""
	I1031 00:09:59.267383  249055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:09:59.267401  249055 start_flags.go:323] config:
	{Name:default-k8s-diff-port-892233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-892233 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.2 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:09:59.267557  249055 iso.go:125] acquiring lock: {Name:mk17c26869b21ec4c3726ac5b4b2fb393d92c043 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:09:59.269225  249055 out.go:177] * Starting control plane node default-k8s-diff-port-892233 in cluster default-k8s-diff-port-892233
	I1031 00:09:57.481224  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:00.553221  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:09:59.270407  249055 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:09:59.270449  249055 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1031 00:09:59.270460  249055 cache.go:56] Caching tarball of preloaded images
	I1031 00:09:59.270553  249055 preload.go:174] Found /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1031 00:09:59.270569  249055 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1031 00:09:59.270702  249055 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/config.json ...
	I1031 00:09:59.270937  249055 start.go:365] acquiring machines lock for default-k8s-diff-port-892233: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 00:10:06.633217  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:09.705265  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:15.785240  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:18.857227  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:24.937215  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:28.009292  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:34.089205  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:37.161208  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:43.241288  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:46.313160  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:52.393273  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:55.465205  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:01.545192  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:04.617227  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:10.697233  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:13.769258  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:19.849250  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:22.921270  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:29.001178  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:32.073257  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:38.153271  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:41.225244  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:47.305235  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:50.377235  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:53.381665  248387 start.go:369] acquired machines lock for "no-preload-640155" in 4m7.945210729s
	I1031 00:11:53.381722  248387 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:11:53.381734  248387 fix.go:54] fixHost starting: 
	I1031 00:11:53.382372  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:11:53.382418  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:11:53.397155  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43017
	I1031 00:11:53.397704  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:11:53.398181  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:11:53.398206  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:11:53.398561  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:11:53.398761  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:11:53.398909  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:11:53.400611  248387 fix.go:102] recreateIfNeeded on no-preload-640155: state=Stopped err=<nil>
	I1031 00:11:53.400634  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	W1031 00:11:53.400782  248387 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:11:53.402394  248387 out.go:177] * Restarting existing kvm2 VM for "no-preload-640155" ...
	I1031 00:11:53.403767  248387 main.go:141] libmachine: (no-preload-640155) Calling .Start
	I1031 00:11:53.403944  248387 main.go:141] libmachine: (no-preload-640155) Ensuring networks are active...
	I1031 00:11:53.404678  248387 main.go:141] libmachine: (no-preload-640155) Ensuring network default is active
	I1031 00:11:53.405127  248387 main.go:141] libmachine: (no-preload-640155) Ensuring network mk-no-preload-640155 is active
	I1031 00:11:53.405642  248387 main.go:141] libmachine: (no-preload-640155) Getting domain xml...
	I1031 00:11:53.406300  248387 main.go:141] libmachine: (no-preload-640155) Creating domain...
	I1031 00:11:54.646418  248387 main.go:141] libmachine: (no-preload-640155) Waiting to get IP...
	I1031 00:11:54.647560  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:54.647956  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:54.648034  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:54.647947  249366 retry.go:31] will retry after 237.521879ms: waiting for machine to come up
	I1031 00:11:54.887446  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:54.887861  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:54.887895  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:54.887804  249366 retry.go:31] will retry after 320.996838ms: waiting for machine to come up
	I1031 00:11:53.379251  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:11:53.379302  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:11:53.381458  248084 machine.go:91] provisioned docker machine in 4m37.397131013s
	I1031 00:11:53.381513  248084 fix.go:56] fixHost completed within 4m37.420319931s
	I1031 00:11:53.381528  248084 start.go:83] releasing machines lock for "old-k8s-version-225140", held for 4m37.420354195s
	W1031 00:11:53.381569  248084 start.go:691] error starting host: provision: host is not running
	W1031 00:11:53.381676  248084 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1031 00:11:53.381687  248084 start.go:706] Will try again in 5 seconds ...
	I1031 00:11:55.210309  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:55.210784  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:55.210818  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:55.210728  249366 retry.go:31] will retry after 412.198071ms: waiting for machine to come up
	I1031 00:11:55.624299  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:55.624689  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:55.624721  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:55.624647  249366 retry.go:31] will retry after 596.339141ms: waiting for machine to come up
	I1031 00:11:56.222381  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:56.222918  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:56.222952  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:56.222864  249366 retry.go:31] will retry after 640.775314ms: waiting for machine to come up
	I1031 00:11:56.865881  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:56.866355  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:56.866394  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:56.866321  249366 retry.go:31] will retry after 797.697217ms: waiting for machine to come up
	I1031 00:11:57.665413  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:57.665930  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:57.665971  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:57.665871  249366 retry.go:31] will retry after 808.934364ms: waiting for machine to come up
	I1031 00:11:58.476161  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:58.476620  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:58.476651  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:58.476582  249366 retry.go:31] will retry after 1.198576442s: waiting for machine to come up
	I1031 00:11:59.676957  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:59.677540  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:59.677575  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:59.677462  249366 retry.go:31] will retry after 1.122967081s: waiting for machine to come up
	I1031 00:11:58.383586  248084 start.go:365] acquiring machines lock for old-k8s-version-225140: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 00:12:00.801790  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:00.802278  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:00.802313  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:00.802216  249366 retry.go:31] will retry after 2.182263229s: waiting for machine to come up
	I1031 00:12:02.987870  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:02.988307  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:02.988339  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:02.988235  249366 retry.go:31] will retry after 2.73312352s: waiting for machine to come up
	I1031 00:12:05.723196  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:05.723664  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:05.723695  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:05.723595  249366 retry.go:31] will retry after 2.33306923s: waiting for machine to come up
	I1031 00:12:08.060086  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:08.060364  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:08.060394  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:08.060328  249366 retry.go:31] will retry after 2.770780436s: waiting for machine to come up
	I1031 00:12:10.834601  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:10.834995  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:10.835020  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:10.834939  249366 retry.go:31] will retry after 4.389090657s: waiting for machine to come up
	I1031 00:12:16.389786  248718 start.go:369] acquired machines lock for "embed-certs-078843" in 3m38.778041195s
	I1031 00:12:16.389855  248718 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:12:16.389864  248718 fix.go:54] fixHost starting: 
	I1031 00:12:16.390317  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:12:16.390362  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:12:16.407875  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36031
	I1031 00:12:16.408273  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:12:16.408842  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:12:16.408870  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:12:16.409226  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:12:16.409404  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:16.409574  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:12:16.410975  248718 fix.go:102] recreateIfNeeded on embed-certs-078843: state=Stopped err=<nil>
	I1031 00:12:16.411013  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	W1031 00:12:16.411196  248718 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:12:16.413529  248718 out.go:177] * Restarting existing kvm2 VM for "embed-certs-078843" ...
	I1031 00:12:16.414858  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Start
	I1031 00:12:16.415041  248718 main.go:141] libmachine: (embed-certs-078843) Ensuring networks are active...
	I1031 00:12:16.415738  248718 main.go:141] libmachine: (embed-certs-078843) Ensuring network default is active
	I1031 00:12:16.416116  248718 main.go:141] libmachine: (embed-certs-078843) Ensuring network mk-embed-certs-078843 is active
	I1031 00:12:16.416450  248718 main.go:141] libmachine: (embed-certs-078843) Getting domain xml...
	I1031 00:12:16.417190  248718 main.go:141] libmachine: (embed-certs-078843) Creating domain...
	I1031 00:12:15.226912  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.227453  248387 main.go:141] libmachine: (no-preload-640155) Found IP for machine: 192.168.61.168
	I1031 00:12:15.227473  248387 main.go:141] libmachine: (no-preload-640155) Reserving static IP address...
	I1031 00:12:15.227513  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has current primary IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.227861  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "no-preload-640155", mac: "52:54:00:bd:a4:c2", ip: "192.168.61.168"} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.227890  248387 main.go:141] libmachine: (no-preload-640155) DBG | skip adding static IP to network mk-no-preload-640155 - found existing host DHCP lease matching {name: "no-preload-640155", mac: "52:54:00:bd:a4:c2", ip: "192.168.61.168"}
	I1031 00:12:15.227900  248387 main.go:141] libmachine: (no-preload-640155) Reserved static IP address: 192.168.61.168
	I1031 00:12:15.227919  248387 main.go:141] libmachine: (no-preload-640155) Waiting for SSH to be available...
	I1031 00:12:15.227938  248387 main.go:141] libmachine: (no-preload-640155) DBG | Getting to WaitForSSH function...
	I1031 00:12:15.230076  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.230450  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.230556  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.230578  248387 main.go:141] libmachine: (no-preload-640155) DBG | Using SSH client type: external
	I1031 00:12:15.230601  248387 main.go:141] libmachine: (no-preload-640155) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa (-rw-------)
	I1031 00:12:15.230646  248387 main.go:141] libmachine: (no-preload-640155) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.168 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:12:15.230666  248387 main.go:141] libmachine: (no-preload-640155) DBG | About to run SSH command:
	I1031 00:12:15.230678  248387 main.go:141] libmachine: (no-preload-640155) DBG | exit 0
	I1031 00:12:15.316515  248387 main.go:141] libmachine: (no-preload-640155) DBG | SSH cmd err, output: <nil>: 
	I1031 00:12:15.316855  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetConfigRaw
	I1031 00:12:15.317658  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetIP
	I1031 00:12:15.320306  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.320647  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.320679  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.321008  248387 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/config.json ...
	I1031 00:12:15.321252  248387 machine.go:88] provisioning docker machine ...
	I1031 00:12:15.321275  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:15.321492  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetMachineName
	I1031 00:12:15.321669  248387 buildroot.go:166] provisioning hostname "no-preload-640155"
	I1031 00:12:15.321691  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetMachineName
	I1031 00:12:15.321858  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.324151  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.324480  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.324518  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.324657  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:15.324849  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.325057  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.325237  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:15.325416  248387 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:15.325795  248387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.168 22 <nil> <nil>}
	I1031 00:12:15.325815  248387 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-640155 && echo "no-preload-640155" | sudo tee /etc/hostname
	I1031 00:12:15.450048  248387 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-640155
	
	I1031 00:12:15.450079  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.452951  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.453298  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.453344  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.453430  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:15.453657  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.453800  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.453899  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:15.454055  248387 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:15.454540  248387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.168 22 <nil> <nil>}
	I1031 00:12:15.454569  248387 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-640155' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-640155/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-640155' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:12:15.574041  248387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:12:15.574072  248387 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:12:15.574104  248387 buildroot.go:174] setting up certificates
	I1031 00:12:15.574116  248387 provision.go:83] configureAuth start
	I1031 00:12:15.574125  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetMachineName
	I1031 00:12:15.574451  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetIP
	I1031 00:12:15.577558  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.578020  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.578059  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.578197  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.580453  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.580832  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.580876  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.581078  248387 provision.go:138] copyHostCerts
	I1031 00:12:15.581171  248387 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:12:15.581184  248387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:12:15.581256  248387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:12:15.581407  248387 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:12:15.581420  248387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:12:15.581453  248387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:12:15.581522  248387 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:12:15.581530  248387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:12:15.581560  248387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:12:15.581611  248387 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.no-preload-640155 san=[192.168.61.168 192.168.61.168 localhost 127.0.0.1 minikube no-preload-640155]
	I1031 00:12:15.693832  248387 provision.go:172] copyRemoteCerts
	I1031 00:12:15.693906  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:12:15.693934  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.696811  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.697210  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.697258  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.697471  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:15.697683  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.697870  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:15.698054  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:12:15.781207  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:12:15.803665  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1031 00:12:15.826369  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 00:12:15.849259  248387 provision.go:86] duration metric: configureAuth took 275.127597ms
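
configureAuth above regenerates the machine's server certificate with the SANs listed in the "generating server cert" line (the VM IP, localhost/127.0.0.1, minikube, and the profile name) and then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A rough standard-library sketch of what issuing such a SAN-bearing server certificate involves; key sizes, validity and error handling are simplified here and this is not minikube's actual provision.go code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Self-signed CA key/cert standing in for ca.pem / ca-key.pem (illustrative).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-640155"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "no-preload-640155"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.61.168"), net.ParseIP("127.0.0.1")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Printf("server cert: %d bytes of DER\n", len(srvDER))
    }
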
	I1031 00:12:15.849292  248387 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:12:15.849476  248387 config.go:182] Loaded profile config "no-preload-640155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:12:15.849565  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.852413  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.852804  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.852848  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.853027  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:15.853227  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.853440  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.853549  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:15.853724  248387 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:15.854104  248387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.168 22 <nil> <nil>}
	I1031 00:12:15.854132  248387 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:12:16.147033  248387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:12:16.147078  248387 machine.go:91] provisioned docker machine in 825.808812ms
	I1031 00:12:16.147094  248387 start.go:300] post-start starting for "no-preload-640155" (driver="kvm2")
	I1031 00:12:16.147110  248387 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:12:16.147138  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.147515  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:12:16.147545  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:16.150321  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.150755  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.150798  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.150909  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:16.151155  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.151335  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:16.151493  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:12:16.237897  248387 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:12:16.242343  248387 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:12:16.242367  248387 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:12:16.242440  248387 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:12:16.242526  248387 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:12:16.242636  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:12:16.250454  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:12:16.273390  248387 start.go:303] post-start completed in 126.280341ms
	I1031 00:12:16.273411  248387 fix.go:56] fixHost completed within 22.891678533s
	I1031 00:12:16.273433  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:16.276291  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.276598  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.276630  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.276761  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:16.276989  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.277270  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.277434  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:16.277621  248387 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:16.277984  248387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.168 22 <nil> <nil>}
	I1031 00:12:16.277998  248387 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1031 00:12:16.389581  248387 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698711136.336935137
	
	I1031 00:12:16.389607  248387 fix.go:206] guest clock: 1698711136.336935137
	I1031 00:12:16.389621  248387 fix.go:219] Guest: 2023-10-31 00:12:16.336935137 +0000 UTC Remote: 2023-10-31 00:12:16.273414732 +0000 UTC m=+271.294357841 (delta=63.520405ms)
	I1031 00:12:16.389652  248387 fix.go:190] guest clock delta is within tolerance: 63.520405ms
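
For reference, the reported delta is simply guest minus host time: 1698711136.336935137 s − 1698711136.273414732 s ≈ 0.063520405 s, i.e. the 63.520405ms above, which is why the skew check passes without a clock resync.
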
	I1031 00:12:16.389659  248387 start.go:83] releasing machines lock for "no-preload-640155", held for 23.007957251s
	I1031 00:12:16.389694  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.390027  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetIP
	I1031 00:12:16.392988  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.393466  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.393493  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.393639  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.394137  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.394306  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.394401  248387 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:12:16.394449  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:16.394583  248387 ssh_runner.go:195] Run: cat /version.json
	I1031 00:12:16.394619  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:16.397387  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.397690  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.397757  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.397785  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.397927  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:16.398140  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.398174  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.398206  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.398296  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:16.398430  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:16.398503  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:12:16.398616  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.398784  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:16.398936  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:12:16.520353  248387 ssh_runner.go:195] Run: systemctl --version
	I1031 00:12:16.526647  248387 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:12:16.673048  248387 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:12:16.679657  248387 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:12:16.679738  248387 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:12:16.699616  248387 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 00:12:16.699643  248387 start.go:472] detecting cgroup driver to use...
	I1031 00:12:16.699706  248387 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:12:16.717466  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:12:16.729231  248387 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:12:16.729300  248387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:12:16.741665  248387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:12:16.754175  248387 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:12:16.855649  248387 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:12:16.990153  248387 docker.go:214] disabling docker service ...
	I1031 00:12:16.990239  248387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:12:17.004614  248387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:12:17.017251  248387 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:12:17.143006  248387 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:12:17.257321  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 00:12:17.271045  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:12:17.288903  248387 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1031 00:12:17.289001  248387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:17.298419  248387 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:12:17.298516  248387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:17.308045  248387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:17.317176  248387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
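
The three sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the pause:3.9 image, the cgroupfs cgroup manager, and a conmon_cgroup of "pod". A small Go sketch of the equivalent text rewrite (the starting file contents here are made up for illustration; minikube itself just runs the sed commands shown in the log):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `pause_image = "registry.k8s.io/pause:3.5"
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"`

        // s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        // s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        // /conmon_cgroup = .*/d  followed by  /cgroup_manager = .*/a conmon_cgroup = "pod"
        conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^(\s*cgroup_manager = .*)$`).
            ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

        fmt.Println(conf)
    }
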
	I1031 00:12:17.327039  248387 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 00:12:17.337269  248387 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:12:17.345814  248387 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 00:12:17.345886  248387 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 00:12:17.359110  248387 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 00:12:17.369376  248387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:12:17.480359  248387 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1031 00:12:17.658006  248387 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:12:17.658099  248387 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:12:17.663296  248387 start.go:540] Will wait 60s for crictl version
	I1031 00:12:17.663467  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:17.667483  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:12:17.709866  248387 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:12:17.709956  248387 ssh_runner.go:195] Run: crio --version
	I1031 00:12:17.757817  248387 ssh_runner.go:195] Run: crio --version
	I1031 00:12:17.812918  248387 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1031 00:12:17.814541  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetIP
	I1031 00:12:17.818008  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:17.818445  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:17.818482  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:17.818745  248387 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1031 00:12:17.822914  248387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:12:17.837885  248387 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:12:17.837941  248387 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:12:17.874977  248387 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1031 00:12:17.875010  248387 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1031 00:12:17.875097  248387 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:17.875104  248387 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:17.875130  248387 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:17.875163  248387 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1031 00:12:17.875181  248387 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:17.875233  248387 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:17.875297  248387 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:17.875306  248387 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:17.876689  248387 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:17.876731  248387 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:17.876696  248387 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:17.876842  248387 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:17.876697  248387 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:17.876695  248387 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:17.876704  248387 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:17.876842  248387 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1031 00:12:18.053090  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:18.059240  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1031 00:12:18.059239  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:18.065016  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:18.069953  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:18.071229  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:18.140026  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:18.149728  248387 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1031 00:12:18.149778  248387 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:18.149835  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.172611  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:18.238794  248387 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1031 00:12:18.238851  248387 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:18.238913  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331173  248387 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1031 00:12:18.331228  248387 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:18.331279  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331278  248387 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1031 00:12:18.331370  248387 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1031 00:12:18.331380  248387 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:18.331401  248387 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:18.331425  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331441  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331463  248387 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1031 00:12:18.331503  248387 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:18.331542  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:18.331584  248387 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1031 00:12:18.331632  248387 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:18.331665  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331545  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331591  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:18.348470  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:18.348506  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:18.348570  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:18.348619  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:18.484280  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:18.484369  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1031 00:12:18.484436  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1031 00:12:18.484501  248387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1031 00:12:18.484532  248387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I1031 00:12:18.513117  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1031 00:12:18.513211  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1031 00:12:18.513238  248387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I1031 00:12:18.513264  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1031 00:12:18.513307  248387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1031 00:12:18.513347  248387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1031 00:12:18.513392  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1031 00:12:18.513515  248387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1031 00:12:18.541278  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1031 00:12:18.541307  248387 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1031 00:12:18.541340  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1031 00:12:18.541348  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1031 00:12:18.541370  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1031 00:12:18.541416  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1031 00:12:18.541466  248387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1031 00:12:18.541493  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1031 00:12:18.541547  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1031 00:12:18.541549  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1031 00:12:17.727796  248718 main.go:141] libmachine: (embed-certs-078843) Waiting to get IP...
	I1031 00:12:17.728716  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:17.729132  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:17.729165  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:17.729087  249483 retry.go:31] will retry after 294.663443ms: waiting for machine to come up
	I1031 00:12:18.025671  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:18.026112  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:18.026145  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:18.026058  249483 retry.go:31] will retry after 377.887631ms: waiting for machine to come up
	I1031 00:12:18.405434  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:18.405878  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:18.405961  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:18.405857  249483 retry.go:31] will retry after 459.989463ms: waiting for machine to come up
	I1031 00:12:18.867094  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:18.867658  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:18.867693  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:18.867590  249483 retry.go:31] will retry after 552.876869ms: waiting for machine to come up
	I1031 00:12:19.422232  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:19.422678  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:19.422711  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:19.422642  249483 retry.go:31] will retry after 574.514705ms: waiting for machine to come up
	I1031 00:12:19.998587  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:19.999158  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:19.999195  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:19.999071  249483 retry.go:31] will retry after 903.246228ms: waiting for machine to come up
	I1031 00:12:20.904654  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:20.905083  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:20.905118  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:20.905028  249483 retry.go:31] will retry after 1.161301577s: waiting for machine to come up
	I1031 00:12:22.067416  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:22.067874  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:22.067906  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:22.067843  249483 retry.go:31] will retry after 1.350619049s: waiting for machine to come up
	I1031 00:12:23.419771  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:23.420313  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:23.420343  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:23.420276  249483 retry.go:31] will retry after 1.783701579s: waiting for machine to come up
	I1031 00:12:25.206301  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:25.206880  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:25.206909  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:25.206820  249483 retry.go:31] will retry after 2.304762715s: waiting for machine to come up
	I1031 00:12:25.834889  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.293473845s)
	I1031 00:12:25.834930  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1031 00:12:25.834949  248387 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.3: (7.293455157s)
	I1031 00:12:25.834967  248387 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1031 00:12:25.834986  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1031 00:12:25.835039  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1031 00:12:28.718454  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (2.883305744s)
	I1031 00:12:28.718498  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1031 00:12:28.718536  248387 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1031 00:12:28.718602  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1031 00:12:27.513250  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:27.513691  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:27.513726  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:27.513617  249483 retry.go:31] will retry after 2.77005827s: waiting for machine to come up
	I1031 00:12:30.287716  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:30.288125  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:30.288181  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:30.288095  249483 retry.go:31] will retry after 2.359494113s: waiting for machine to come up
	I1031 00:12:30.082206  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.363538098s)
	I1031 00:12:30.082241  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1031 00:12:30.082284  248387 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1031 00:12:30.082378  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1031 00:12:32.754830  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (2.672412397s)
	I1031 00:12:32.754865  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1031 00:12:32.754922  248387 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1031 00:12:32.755008  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1031 00:12:34.104402  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (1.3493522s)
	I1031 00:12:34.104443  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1031 00:12:34.104484  248387 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1031 00:12:34.104528  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1031 00:12:36.966451  249055 start.go:369] acquired machines lock for "default-k8s-diff-port-892233" in 2m37.695455763s
	I1031 00:12:36.966568  249055 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:12:36.966579  249055 fix.go:54] fixHost starting: 
	I1031 00:12:36.966927  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:12:36.966965  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:12:36.985392  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46007
	I1031 00:12:36.985889  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:12:36.986473  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:12:36.986501  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:12:36.986870  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:12:36.987100  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:12:36.987295  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:12:36.989416  249055 fix.go:102] recreateIfNeeded on default-k8s-diff-port-892233: state=Stopped err=<nil>
	I1031 00:12:36.989470  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	W1031 00:12:36.989641  249055 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:12:36.991746  249055 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-892233" ...
	I1031 00:12:32.648970  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:32.649516  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:32.649563  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:32.649477  249483 retry.go:31] will retry after 2.827972253s: waiting for machine to come up
	I1031 00:12:35.479127  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.479655  248718 main.go:141] libmachine: (embed-certs-078843) Found IP for machine: 192.168.50.2
	I1031 00:12:35.479691  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has current primary IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.479703  248718 main.go:141] libmachine: (embed-certs-078843) Reserving static IP address...
	I1031 00:12:35.480200  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "embed-certs-078843", mac: "52:54:00:f5:a8:73", ip: "192.168.50.2"} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.480259  248718 main.go:141] libmachine: (embed-certs-078843) DBG | skip adding static IP to network mk-embed-certs-078843 - found existing host DHCP lease matching {name: "embed-certs-078843", mac: "52:54:00:f5:a8:73", ip: "192.168.50.2"}
	I1031 00:12:35.480299  248718 main.go:141] libmachine: (embed-certs-078843) Reserved static IP address: 192.168.50.2
	I1031 00:12:35.480319  248718 main.go:141] libmachine: (embed-certs-078843) Waiting for SSH to be available...
	I1031 00:12:35.480334  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Getting to WaitForSSH function...
	I1031 00:12:35.482640  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.483140  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.483177  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.483343  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Using SSH client type: external
	I1031 00:12:35.483373  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa (-rw-------)
	I1031 00:12:35.483409  248718 main.go:141] libmachine: (embed-certs-078843) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:12:35.483434  248718 main.go:141] libmachine: (embed-certs-078843) DBG | About to run SSH command:
	I1031 00:12:35.483453  248718 main.go:141] libmachine: (embed-certs-078843) DBG | exit 0
	I1031 00:12:35.573283  248718 main.go:141] libmachine: (embed-certs-078843) DBG | SSH cmd err, output: <nil>: 
	I1031 00:12:35.573731  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetConfigRaw
	I1031 00:12:35.574538  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetIP
	I1031 00:12:35.577369  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.577820  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.577856  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.578175  248718 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/config.json ...
	I1031 00:12:35.578461  248718 machine.go:88] provisioning docker machine ...
	I1031 00:12:35.578486  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:35.578719  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetMachineName
	I1031 00:12:35.578919  248718 buildroot.go:166] provisioning hostname "embed-certs-078843"
	I1031 00:12:35.578946  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetMachineName
	I1031 00:12:35.579137  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:35.581632  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.582041  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.582075  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.582185  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:35.582376  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:35.582556  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:35.582694  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:35.582864  248718 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:35.583247  248718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I1031 00:12:35.583268  248718 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-078843 && echo "embed-certs-078843" | sudo tee /etc/hostname
	I1031 00:12:35.717684  248718 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-078843
	
	I1031 00:12:35.717719  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:35.720882  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.721264  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.721299  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.721514  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:35.721732  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:35.721908  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:35.722057  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:35.722318  248718 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:35.722757  248718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I1031 00:12:35.722777  248718 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-078843' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-078843/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-078843' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:12:35.865568  248718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:12:35.865626  248718 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:12:35.865667  248718 buildroot.go:174] setting up certificates
	I1031 00:12:35.865682  248718 provision.go:83] configureAuth start
	I1031 00:12:35.865696  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetMachineName
	I1031 00:12:35.866070  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetIP
	I1031 00:12:35.869149  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.869571  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.869610  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.869731  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:35.872260  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.872618  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.872665  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.872855  248718 provision.go:138] copyHostCerts
	I1031 00:12:35.872978  248718 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:12:35.873000  248718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:12:35.873069  248718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:12:35.873192  248718 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:12:35.873203  248718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:12:35.873234  248718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:12:35.873316  248718 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:12:35.873327  248718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:12:35.873352  248718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:12:35.873426  248718 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.embed-certs-078843 san=[192.168.50.2 192.168.50.2 localhost 127.0.0.1 minikube embed-certs-078843]
	I1031 00:12:36.016430  248718 provision.go:172] copyRemoteCerts
	I1031 00:12:36.016506  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:12:36.016553  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.019662  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.020054  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.020088  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.020286  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.020505  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.020658  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.020843  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:12:36.111793  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:12:36.140569  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1031 00:12:36.179708  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 00:12:36.203348  248718 provision.go:86] duration metric: configureAuth took 337.646698ms
	I1031 00:12:36.203385  248718 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:12:36.203690  248718 config.go:182] Loaded profile config "embed-certs-078843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:12:36.203835  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.207444  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.207883  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.207923  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.208236  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.208498  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.208690  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.208912  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.209163  248718 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:36.209521  248718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I1031 00:12:36.209547  248718 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:12:36.711502  248718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:12:36.711535  248718 machine.go:91] provisioned docker machine in 1.133056882s
	I1031 00:12:36.711550  248718 start.go:300] post-start starting for "embed-certs-078843" (driver="kvm2")
	I1031 00:12:36.711563  248718 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:12:36.711587  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.711984  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:12:36.712027  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.714954  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.715374  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.715408  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.715610  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.715815  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.716019  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.716192  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:12:36.803613  248718 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:12:36.808855  248718 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:12:36.808888  248718 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:12:36.808973  248718 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:12:36.809100  248718 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:12:36.809240  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:12:36.818339  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:12:36.845738  248718 start.go:303] post-start completed in 134.172265ms
	I1031 00:12:36.845765  248718 fix.go:56] fixHost completed within 20.4559017s
	I1031 00:12:36.845788  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.848249  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.848592  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.848621  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.848861  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.849120  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.849307  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.849462  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.849659  248718 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:36.850033  248718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I1031 00:12:36.850047  248718 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 00:12:36.966267  248718 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698711156.912809532
	
	I1031 00:12:36.966293  248718 fix.go:206] guest clock: 1698711156.912809532
	I1031 00:12:36.966303  248718 fix.go:219] Guest: 2023-10-31 00:12:36.912809532 +0000 UTC Remote: 2023-10-31 00:12:36.845768911 +0000 UTC m=+239.388163644 (delta=67.040621ms)
	I1031 00:12:36.966329  248718 fix.go:190] guest clock delta is within tolerance: 67.040621ms
	I1031 00:12:36.966341  248718 start.go:83] releasing machines lock for "embed-certs-078843", held for 20.576516085s
	I1031 00:12:36.966380  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.967388  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetIP
	I1031 00:12:36.970301  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.970734  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.970766  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.970934  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.971468  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.971683  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.971781  248718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:12:36.971832  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.971921  248718 ssh_runner.go:195] Run: cat /version.json
	I1031 00:12:36.971951  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.974873  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.975244  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.975323  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.975420  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.975692  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.975718  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.975759  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.975901  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.975959  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.976068  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.976221  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.976279  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.976358  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:12:36.977011  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:12:37.095751  248718 ssh_runner.go:195] Run: systemctl --version
	I1031 00:12:37.101600  248718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:12:37.244676  248718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:12:37.253623  248718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:12:37.253702  248718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:12:37.272872  248718 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 00:12:37.272897  248718 start.go:472] detecting cgroup driver to use...
	I1031 00:12:37.272992  248718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:12:37.290899  248718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:12:37.306570  248718 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:12:37.306633  248718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:12:37.321827  248718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:12:37.336787  248718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:12:37.451589  248718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:12:37.571290  248718 docker.go:214] disabling docker service ...
	I1031 00:12:37.571375  248718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:12:37.587764  248718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:12:37.600627  248718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:12:37.733539  248718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:12:37.850154  248718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 00:12:37.865463  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:12:37.883661  248718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1031 00:12:37.883728  248718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:37.892717  248718 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:12:37.892783  248718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:37.901944  248718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:37.911061  248718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:37.920094  248718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 00:12:37.929520  248718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:12:37.937333  248718 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 00:12:37.937404  248718 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 00:12:37.949591  248718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 00:12:37.960061  248718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:12:38.076354  248718 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1031 00:12:38.250618  248718 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:12:38.250688  248718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:12:38.255979  248718 start.go:540] Will wait 60s for crictl version
	I1031 00:12:38.256036  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:12:38.259822  248718 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:12:38.299812  248718 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:12:38.299981  248718 ssh_runner.go:195] Run: crio --version
	I1031 00:12:38.343088  248718 ssh_runner.go:195] Run: crio --version
	I1031 00:12:38.397252  248718 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1031 00:12:36.993369  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Start
	I1031 00:12:36.993641  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Ensuring networks are active...
	I1031 00:12:36.994545  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Ensuring network default is active
	I1031 00:12:36.994911  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Ensuring network mk-default-k8s-diff-port-892233 is active
	I1031 00:12:36.995448  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Getting domain xml...
	I1031 00:12:36.996378  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Creating domain...
	I1031 00:12:38.342502  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting to get IP...
	I1031 00:12:38.343505  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.344038  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.344115  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:38.344004  249635 retry.go:31] will retry after 206.530958ms: waiting for machine to come up
	I1031 00:12:38.552789  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.553109  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.553140  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:38.553059  249635 retry.go:31] will retry after 272.962928ms: waiting for machine to come up
	I1031 00:12:38.827741  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.828288  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.828326  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:38.828242  249635 retry.go:31] will retry after 411.85264ms: waiting for machine to come up
	I1031 00:12:35.048294  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1031 00:12:35.048344  248387 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1031 00:12:35.048404  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1031 00:12:36.902739  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (1.854307965s)
	I1031 00:12:36.902771  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1031 00:12:36.902803  248387 cache_images.go:123] Successfully loaded all cached images
	I1031 00:12:36.902810  248387 cache_images.go:92] LoadImages completed in 19.027785915s
	I1031 00:12:36.902926  248387 ssh_runner.go:195] Run: crio config
	I1031 00:12:36.961891  248387 cni.go:84] Creating CNI manager for ""
	I1031 00:12:36.961922  248387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:12:36.961950  248387 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:12:36.961992  248387 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.168 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-640155 NodeName:no-preload-640155 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 00:12:36.962203  248387 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-640155"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 00:12:36.962312  248387 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-640155 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-640155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 00:12:36.962389  248387 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 00:12:36.973945  248387 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:12:36.974026  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:12:36.987534  248387 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1031 00:12:37.006510  248387 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:12:37.025092  248387 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1031 00:12:37.045090  248387 ssh_runner.go:195] Run: grep 192.168.61.168	control-plane.minikube.internal$ /etc/hosts
	I1031 00:12:37.048822  248387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.168	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:12:37.061985  248387 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155 for IP: 192.168.61.168
	I1031 00:12:37.062026  248387 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:12:37.062243  248387 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:12:37.062310  248387 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:12:37.062410  248387 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/client.key
	I1031 00:12:37.062508  248387 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/apiserver.key.96e3443b
	I1031 00:12:37.062570  248387 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/proxy-client.key
	I1031 00:12:37.062707  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:12:37.062750  248387 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:12:37.062767  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:12:37.062832  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:12:37.062877  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:12:37.062923  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:12:37.062987  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:12:37.063745  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:12:37.090011  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:12:37.119401  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:12:37.148361  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 00:12:37.173730  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:12:37.197769  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:12:37.221625  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:12:37.244497  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:12:37.274559  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:12:37.300372  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:12:37.332082  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:12:37.361826  248387 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:12:37.380561  248387 ssh_runner.go:195] Run: openssl version
	I1031 00:12:37.386185  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:12:37.396710  248387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:12:37.401896  248387 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:12:37.401983  248387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:12:37.407778  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:12:37.418091  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:12:37.427985  248387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:37.432581  248387 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:37.432649  248387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:37.438103  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 00:12:37.447792  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:12:37.457689  248387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:12:37.462421  248387 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:12:37.462495  248387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:12:37.468482  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1031 00:12:37.478565  248387 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:12:37.483264  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:12:37.491175  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:12:37.498212  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:12:37.504019  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:12:37.509730  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:12:37.516218  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1031 00:12:37.523364  248387 kubeadm.go:404] StartCluster: {Name:no-preload-640155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.3 ClusterName:no-preload-640155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.168 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:12:37.523465  248387 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:12:37.523522  248387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:12:37.576223  248387 cri.go:89] found id: ""
	I1031 00:12:37.576314  248387 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 00:12:37.586094  248387 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 00:12:37.586133  248387 kubeadm.go:636] restartCluster start
	I1031 00:12:37.586217  248387 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 00:12:37.595614  248387 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:37.596791  248387 kubeconfig.go:92] found "no-preload-640155" server: "https://192.168.61.168:8443"
	I1031 00:12:37.600710  248387 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 00:12:37.610066  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:37.610137  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:37.620501  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:37.620528  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:37.620578  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:37.630477  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:38.131205  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:38.131335  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:38.144627  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:38.631491  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:38.631587  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:38.647034  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:39.131616  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:39.131749  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:39.148723  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:39.631171  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:39.631273  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:39.645807  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
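	(Editor's note: the run of "Checking apiserver status ..." entries above is minikube polling for a kube-apiserver process roughly every half second until one appears or the restart deadline passes; each failed `pgrep` prints the empty stdout/stderr pair seen here. A rough, self-contained sketch of that wait loop follows - the command comes from the log, while the interval and timeout are illustrative, not minikube's exact values.)

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID keeps running the same pgrep check the log shows until
// it returns a PID or the context deadline expires.
func waitForAPIServerPID(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx)
	fmt.Println(pid, err)
}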
	I1031 00:12:38.398862  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetIP
	I1031 00:12:38.401804  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:38.402158  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:38.402193  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:38.402475  248718 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1031 00:12:38.407041  248718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:12:38.421147  248718 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:12:38.421228  248718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:12:38.461162  248718 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1031 00:12:38.461240  248718 ssh_runner.go:195] Run: which lz4
	I1031 00:12:38.465401  248718 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1031 00:12:38.469796  248718 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 00:12:38.469833  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1031 00:12:40.419642  248718 crio.go:444] Took 1.954260 seconds to copy over tarball
	I1031 00:12:40.419721  248718 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 00:12:39.241872  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:39.242407  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:39.242465  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:39.242347  249635 retry.go:31] will retry after 371.774477ms: waiting for machine to come up
	I1031 00:12:39.616171  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:39.616708  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:39.616747  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:39.616671  249635 retry.go:31] will retry after 487.120901ms: waiting for machine to come up
	I1031 00:12:40.105492  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:40.106116  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:40.106151  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:40.106066  249635 retry.go:31] will retry after 767.19349ms: waiting for machine to come up
	I1031 00:12:40.875432  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:40.875932  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:40.876009  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:40.875892  249635 retry.go:31] will retry after 976.411998ms: waiting for machine to come up
	I1031 00:12:41.854227  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:41.854759  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:41.854794  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:41.854691  249635 retry.go:31] will retry after 1.041793781s: waiting for machine to come up
	I1031 00:12:42.898223  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:42.898628  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:42.898658  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:42.898577  249635 retry.go:31] will retry after 1.163252223s: waiting for machine to come up
	I1031 00:12:44.064217  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:44.064593  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:44.064626  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:44.064543  249635 retry.go:31] will retry after 1.879015473s: waiting for machine to come up
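	(Editor's note: in parallel, the default-k8s-diff-port-892233 entries above show the libmachine KVM driver waiting for the freshly started VM to obtain a DHCP lease, retrying with progressively longer delays. An illustrative Go version of that loop follows - lookupLeaseIP is a hypothetical stand-in for the real libvirt lease query, and the backoff schedule only approximates the intervals retry.go logs.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP stands in for the real libvirt DHCP lease lookup by MAC address.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls for the machine's IP with growing, jittered delays,
// mirroring the "will retry after ..." pattern in the log.
func waitForIP(mac string, attempts int) (string, error) {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		time.Sleep(delay + jitter)
		delay *= 2 // wait longer between successive checks
	}
	return "", fmt.Errorf("machine %s never came up", mac)
}

func main() {
	ip, err := waitForIP("52:54:00:f4:e2:1e", 10)
	fmt.Println(ip, err)
}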
	I1031 00:12:40.131216  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:40.131331  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:40.146846  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:40.630673  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:40.630747  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:40.642955  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:41.131275  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:41.131410  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:41.144530  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:41.631108  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:41.631219  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:41.645873  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:42.131506  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:42.131641  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:42.147504  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:42.630664  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:42.630769  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:42.645755  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:43.131375  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:43.131503  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:43.143357  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:43.631616  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:43.631714  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:43.647203  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.130693  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.130791  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.143566  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.630736  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.630816  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.642486  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:43.535831  248718 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.116078442s)
	I1031 00:12:43.535864  248718 crio.go:451] Took 3.116189 seconds to extract the tarball
	I1031 00:12:43.535877  248718 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 00:12:43.579902  248718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:12:43.635701  248718 crio.go:496] all images are preloaded for cri-o runtime.
	I1031 00:12:43.635724  248718 cache_images.go:84] Images are preloaded, skipping loading
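	(Editor's note: the preload flow above first asks crictl for its image list; because the expected kube-apiserver image for v1.28.3 was missing, the preloaded tarball was copied in and extracted, after which the same check passes and loading is skipped. A simplified sketch of that decision follows - the helper is hypothetical and the JSON parsing is reduced to a substring match.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagesPreloaded reports whether crictl already knows about the
// kube-apiserver image for the requested Kubernetes version, which is the
// signal used above to decide whether the preload tarball must be extracted.
func imagesPreloaded(k8sVersion string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	want := "registry.k8s.io/kube-apiserver:" + k8sVersion
	return strings.Contains(string(out), want), nil
}

func main() {
	ok, err := imagesPreloaded("v1.28.3")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	if ok {
		fmt.Println("all images are preloaded, skipping loading")
	} else {
		fmt.Println("missing preloaded images, would copy over and extract /preloaded.tar.lz4")
	}
}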
	I1031 00:12:43.635796  248718 ssh_runner.go:195] Run: crio config
	I1031 00:12:43.714916  248718 cni.go:84] Creating CNI manager for ""
	I1031 00:12:43.714939  248718 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:12:43.714958  248718 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:12:43.714976  248718 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-078843 NodeName:embed-certs-078843 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 00:12:43.715146  248718 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-078843"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 00:12:43.715232  248718 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-078843 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-078843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 00:12:43.715295  248718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 00:12:43.726847  248718 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:12:43.726938  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:12:43.738352  248718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1031 00:12:43.756439  248718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:12:43.773955  248718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1031 00:12:43.793790  248718 ssh_runner.go:195] Run: grep 192.168.50.2	control-plane.minikube.internal$ /etc/hosts
	I1031 00:12:43.798155  248718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:12:43.811602  248718 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843 for IP: 192.168.50.2
	I1031 00:12:43.811649  248718 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:12:43.811819  248718 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:12:43.811877  248718 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:12:43.811963  248718 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/client.key
	I1031 00:12:43.812051  248718 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/apiserver.key.e10f976c
	I1031 00:12:43.812117  248718 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/proxy-client.key
	I1031 00:12:43.812261  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:12:43.812301  248718 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:12:43.812317  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:12:43.812359  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:12:43.812395  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:12:43.812430  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:12:43.812491  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:12:43.813192  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:12:43.841097  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:12:43.867995  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:12:43.892834  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1031 00:12:43.917649  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:12:43.942299  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:12:43.971154  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:12:43.995032  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:12:44.022277  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:12:44.047549  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:12:44.071370  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:12:44.095933  248718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:12:44.113479  248718 ssh_runner.go:195] Run: openssl version
	I1031 00:12:44.119266  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:12:44.133710  248718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:12:44.140098  248718 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:12:44.140180  248718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:12:44.146416  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:12:44.159207  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:12:44.171618  248718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:44.178288  248718 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:44.178375  248718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:44.186339  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 00:12:44.200864  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:12:44.212513  248718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:12:44.217549  248718 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:12:44.217616  248718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:12:44.225170  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1031 00:12:44.239600  248718 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:12:44.244470  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:12:44.252637  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:12:44.260635  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:12:44.269017  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:12:44.277210  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:12:44.285394  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1031 00:12:44.293419  248718 kubeadm.go:404] StartCluster: {Name:embed-certs-078843 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-078843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:12:44.293507  248718 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:12:44.293620  248718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:12:44.339212  248718 cri.go:89] found id: ""
	I1031 00:12:44.339302  248718 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 00:12:44.350219  248718 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 00:12:44.350249  248718 kubeadm.go:636] restartCluster start
	I1031 00:12:44.350315  248718 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 00:12:44.360185  248718 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.361826  248718 kubeconfig.go:92] found "embed-certs-078843" server: "https://192.168.50.2:8443"
	I1031 00:12:44.365579  248718 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 00:12:44.376923  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.377021  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.390684  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.390708  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.390768  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.404614  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.905332  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.905451  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.918162  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:45.405760  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:45.405845  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:45.419071  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:45.905669  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:45.905770  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:45.922243  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:46.404757  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:46.404870  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:46.419662  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:46.905223  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:46.905328  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:46.919993  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:47.405571  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:47.405660  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:47.418433  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:45.944837  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:45.945386  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:45.945422  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:45.945318  249635 retry.go:31] will retry after 1.840120385s: waiting for machine to come up
	I1031 00:12:47.787276  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:47.787807  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:47.787844  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:47.787751  249635 retry.go:31] will retry after 2.306470153s: waiting for machine to come up
	I1031 00:12:45.131185  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:45.225229  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:45.237425  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:45.630872  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:45.630948  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:45.644580  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:46.131199  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:46.131280  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:46.142872  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:46.631467  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:46.631545  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:46.648339  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:47.130861  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:47.131000  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:47.146189  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:47.610939  248387 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:12:47.610999  248387 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:12:47.611016  248387 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:12:47.611107  248387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:12:47.656888  248387 cri.go:89] found id: ""
	I1031 00:12:47.656982  248387 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:12:47.678724  248387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:12:47.688879  248387 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:12:47.688985  248387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:12:47.697091  248387 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:12:47.697115  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:47.837056  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:48.448497  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:48.639877  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:48.735406  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
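	(Editor's note: because the config check failed - none of the /etc/kubernetes/*.conf files exist - minikube rebuilds the control plane with individual `kubeadm init phase` subcommands against the regenerated kubeadm.yaml rather than a full `kubeadm init`. The sequence above, expressed as a short sketch; the real commands run via `sudo env PATH=...` as in the log, and error handling is simplified here.)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// reconfigure replays the kubeadm phases the log shows for a cluster restart:
// certs, kubeconfigs, kubelet bootstrap, static control-plane manifests,
// then local etcd.
func reconfigure(binDir, config string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", config)
		cmd := exec.Command(binDir+"/kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("kubeadm %v: %w", phase, err)
		}
	}
	return nil
}

func main() {
	if err := reconfigure("/var/lib/minikube/binaries/v1.28.3", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}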
	I1031 00:12:48.824428  248387 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:12:48.824521  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:48.840207  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:49.357050  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:49.857029  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:47.905449  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:47.905552  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:47.921939  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:48.405557  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:48.405656  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:48.417674  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:48.905114  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:48.905225  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:48.919218  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:49.404811  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:49.404908  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:49.420062  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:49.905655  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:49.905769  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:49.922828  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:50.405471  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:50.405578  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:50.423259  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:50.904727  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:50.904819  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:50.920673  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:51.405155  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:51.405246  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:51.421731  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:51.905024  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:51.905101  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:51.919385  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:52.404843  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:52.404985  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:52.420088  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:50.095827  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:50.096326  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:50.096365  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:50.096281  249635 retry.go:31] will retry after 3.872051375s: waiting for machine to come up
	I1031 00:12:53.970393  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:53.970918  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:53.970956  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:53.970839  249635 retry.go:31] will retry after 5.345847198s: waiting for machine to come up
	I1031 00:12:50.357101  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:50.857024  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:51.357298  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:51.380143  248387 api_server.go:72] duration metric: took 2.555721824s to wait for apiserver process to appear ...
	I1031 00:12:51.380180  248387 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:12:51.380220  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:54.457683  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:12:54.457719  248387 api_server.go:103] status: https://192.168.61.168:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:12:54.457733  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:54.509385  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:12:54.509424  248387 api_server.go:103] status: https://192.168.61.168:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:12:55.010185  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:55.017172  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:12:55.017201  248387 api_server.go:103] status: https://192.168.61.168:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:12:55.510171  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:55.517062  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:12:55.517114  248387 api_server.go:103] status: https://192.168.61.168:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:12:56.009671  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:56.017135  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 200:
	ok
	I1031 00:12:56.026278  248387 api_server.go:141] control plane version: v1.28.3
	I1031 00:12:56.026307  248387 api_server.go:131] duration metric: took 4.646117858s to wait for apiserver health ...
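	(Editor's note: once the process exists, minikube switches from pgrep to polling https://<node>:8443/healthz, tolerating the transient 403 (anonymous user, before RBAC bootstrap) and 500 (post-start hooks still running) responses above until it gets a 200. A compact sketch of that readiness probe follows; it is hypothetical, and skips TLS verification only to stay self-contained where minikube itself trusts the cluster CA.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it answers 200 OK
// or the attempts run out; any other status is treated as "not ready yet".
func waitHealthz(url string, attempts int) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch: skip certificate verification instead of
		// loading /var/lib/minikube/certs/ca.crt.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.61.168:8443/healthz", 60))
}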
	I1031 00:12:56.026319  248387 cni.go:84] Creating CNI manager for ""
	I1031 00:12:56.026331  248387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:12:56.028208  248387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:12:52.904735  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:52.904835  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:52.917320  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:53.405426  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:53.405546  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:53.420386  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:53.904921  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:53.905039  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:53.917303  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:54.377921  248718 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:12:54.377976  248718 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:12:54.377991  248718 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:12:54.378079  248718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:12:54.418685  248718 cri.go:89] found id: ""
	I1031 00:12:54.418768  248718 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:12:54.436536  248718 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:12:54.451466  248718 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:12:54.451534  248718 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:12:54.464460  248718 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:12:54.464484  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:54.601286  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:55.468262  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:55.664604  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:55.761171  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:55.838690  248718 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:12:55.838793  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:55.857817  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:56.379368  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:56.878782  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:57.379756  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:56.029552  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:12:56.078774  248387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:12:56.128262  248387 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:12:56.139995  248387 system_pods.go:59] 8 kube-system pods found
	I1031 00:12:56.140025  248387 system_pods.go:61] "coredns-5dd5756b68-qbvjb" [92f771d8-381b-4e38-945f-ad5ceae72b80] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1031 00:12:56.140035  248387 system_pods.go:61] "etcd-no-preload-640155" [44fcbc32-757b-4406-97ed-88ad76ae4eee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1031 00:12:56.140042  248387 system_pods.go:61] "kube-apiserver-no-preload-640155" [b92b3dec-827f-4221-8c28-83a738186e52] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1031 00:12:56.140048  248387 system_pods.go:61] "kube-controller-manager-no-preload-640155" [62661788-bde2-42b9-9469-a2f2c51ee283] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1031 00:12:56.140057  248387 system_pods.go:61] "kube-proxy-rv76j" [293b1dd9-fc85-4647-91c9-874ad357d222] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1031 00:12:56.140063  248387 system_pods.go:61] "kube-scheduler-no-preload-640155" [6a11d962-b407-467e-b8a0-9a101b16e4d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1031 00:12:56.140076  248387 system_pods.go:61] "metrics-server-57f55c9bc5-nm8dj" [3924727e-2734-497d-b1b1-d8f9a0ab095a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:12:56.140090  248387 system_pods.go:61] "storage-provisioner" [f8e0a3fa-eaf1-45e1-afbc-a5b2287e7703] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1031 00:12:56.140100  248387 system_pods.go:74] duration metric: took 11.816257ms to wait for pod list to return data ...
	I1031 00:12:56.140110  248387 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:12:56.143298  248387 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:12:56.143327  248387 node_conditions.go:123] node cpu capacity is 2
	I1031 00:12:56.143365  248387 node_conditions.go:105] duration metric: took 3.247248ms to run NodePressure ...
	I1031 00:12:56.143402  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:56.398227  248387 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1031 00:12:56.403101  248387 kubeadm.go:787] kubelet initialised
	I1031 00:12:56.403124  248387 kubeadm.go:788] duration metric: took 4.866042ms waiting for restarted kubelet to initialise ...
	I1031 00:12:56.403134  248387 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:12:56.408758  248387 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-qbvjb" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:56.416185  248387 pod_ready.go:97] node "no-preload-640155" hosting pod "coredns-5dd5756b68-qbvjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.416218  248387 pod_ready.go:81] duration metric: took 7.431969ms waiting for pod "coredns-5dd5756b68-qbvjb" in "kube-system" namespace to be "Ready" ...
	E1031 00:12:56.416229  248387 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-640155" hosting pod "coredns-5dd5756b68-qbvjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.416238  248387 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:56.421589  248387 pod_ready.go:97] node "no-preload-640155" hosting pod "etcd-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.421611  248387 pod_ready.go:81] duration metric: took 5.364261ms waiting for pod "etcd-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	E1031 00:12:56.421619  248387 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-640155" hosting pod "etcd-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.421624  248387 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:56.427046  248387 pod_ready.go:97] node "no-preload-640155" hosting pod "kube-apiserver-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.427075  248387 pod_ready.go:81] duration metric: took 5.443698ms waiting for pod "kube-apiserver-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	E1031 00:12:56.427086  248387 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-640155" hosting pod "kube-apiserver-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.427098  248387 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:56.534169  248387 pod_ready.go:97] node "no-preload-640155" hosting pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.534224  248387 pod_ready.go:81] duration metric: took 107.102474ms waiting for pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	E1031 00:12:56.534241  248387 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-640155" hosting pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.534255  248387 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rv76j" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:57.332793  248387 pod_ready.go:92] pod "kube-proxy-rv76j" in "kube-system" namespace has status "Ready":"True"
	I1031 00:12:57.332824  248387 pod_ready.go:81] duration metric: took 798.55794ms waiting for pod "kube-proxy-rv76j" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:57.332838  248387 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:59.642186  248387 pod_ready.go:102] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"False"
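
Note: the pod_ready.go lines above wait on each system-critical pod's "Ready" condition, skipping pods hosted on a node whose own Ready status is still "False". A minimal client-go sketch of that readiness check, assuming an illustrative kubeconfig path and pod name (not minikube's implementation):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True,
    // mirroring the `"Ready":"True"` checks in the log lines above.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Kubeconfig path and pod name are illustrative assumptions.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for i := 0; i < 240; i++ { // poll for up to ~4 minutes, like the 4m0s wait above
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-no-preload-640155", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }
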
	I1031 00:13:00.818958  248084 start.go:369] acquired machines lock for "old-k8s-version-225140" in 1m2.435313483s
	I1031 00:13:00.819017  248084 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:13:00.819032  248084 fix.go:54] fixHost starting: 
	I1031 00:13:00.819456  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:00.819490  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:00.838737  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I1031 00:13:00.839191  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:00.839773  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:13:00.839794  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:00.840290  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:00.840514  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:00.840697  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:13:00.843346  248084 fix.go:102] recreateIfNeeded on old-k8s-version-225140: state=Stopped err=<nil>
	I1031 00:13:00.843381  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	W1031 00:13:00.843658  248084 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:13:00.848997  248084 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-225140" ...
	I1031 00:12:59.318443  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.319011  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Found IP for machine: 192.168.39.2
	I1031 00:12:59.319037  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Reserving static IP address...
	I1031 00:12:59.319070  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has current primary IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.319522  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-892233", mac: "52:54:00:f4:e2:1e", ip: "192.168.39.2"} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.319557  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Reserved static IP address: 192.168.39.2
	I1031 00:12:59.319595  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | skip adding static IP to network mk-default-k8s-diff-port-892233 - found existing host DHCP lease matching {name: "default-k8s-diff-port-892233", mac: "52:54:00:f4:e2:1e", ip: "192.168.39.2"}
	I1031 00:12:59.319620  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | Getting to WaitForSSH function...
	I1031 00:12:59.319653  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for SSH to be available...
	I1031 00:12:59.322357  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.322780  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.322819  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.322938  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | Using SSH client type: external
	I1031 00:12:59.322969  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa (-rw-------)
	I1031 00:12:59.323009  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:12:59.323029  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | About to run SSH command:
	I1031 00:12:59.323064  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | exit 0
	I1031 00:12:59.421581  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | SSH cmd err, output: <nil>: 
	I1031 00:12:59.421963  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetConfigRaw
	I1031 00:12:59.422651  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetIP
	I1031 00:12:59.425540  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.425916  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.425961  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.426201  249055 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/config.json ...
	I1031 00:12:59.426454  249055 machine.go:88] provisioning docker machine ...
	I1031 00:12:59.426481  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:12:59.426720  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetMachineName
	I1031 00:12:59.426879  249055 buildroot.go:166] provisioning hostname "default-k8s-diff-port-892233"
	I1031 00:12:59.426898  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetMachineName
	I1031 00:12:59.427067  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:12:59.429588  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.429937  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.429975  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.430208  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:12:59.430403  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:12:59.430573  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:12:59.430690  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:12:59.430852  249055 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:59.431368  249055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1031 00:12:59.431386  249055 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-892233 && echo "default-k8s-diff-port-892233" | sudo tee /etc/hostname
	I1031 00:12:59.572253  249055 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-892233
	
	I1031 00:12:59.572299  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:12:59.575534  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.575858  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.575919  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.576140  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:12:59.576366  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:12:59.576592  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:12:59.576766  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:12:59.576919  249055 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:59.577349  249055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1031 00:12:59.577372  249055 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-892233' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-892233/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-892233' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:12:59.714987  249055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
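
Note: the provisioning steps above run shell commands on the guest over SSH with the machine's private key (set the hostname, patch /etc/hosts). A minimal Go sketch of running one such command with golang.org/x/crypto/ssh; the address, user and key path are illustrative assumptions, not the paths used by this job:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runSSH executes a single command on the guest over SSH using key-based
    // auth, the same basic pattern the provisioner above follows.
    func runSSH(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM host keys
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runSSH("192.168.39.2:22", "docker", "/home/user/.ssh/id_rsa", "hostname")
    	fmt.Println(out, err)
    }
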
	I1031 00:12:59.715020  249055 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:12:59.715079  249055 buildroot.go:174] setting up certificates
	I1031 00:12:59.715094  249055 provision.go:83] configureAuth start
	I1031 00:12:59.715115  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetMachineName
	I1031 00:12:59.715440  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetIP
	I1031 00:12:59.718485  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.718900  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.718932  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.719039  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:12:59.721488  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.721844  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.721874  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.722068  249055 provision.go:138] copyHostCerts
	I1031 00:12:59.722141  249055 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:12:59.722155  249055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:12:59.722227  249055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:12:59.722363  249055 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:12:59.722377  249055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:12:59.722402  249055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:12:59.722528  249055 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:12:59.722538  249055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:12:59.722560  249055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:12:59.722619  249055 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-892233 san=[192.168.39.2 192.168.39.2 localhost 127.0.0.1 minikube default-k8s-diff-port-892233]
	I1031 00:13:00.038821  249055 provision.go:172] copyRemoteCerts
	I1031 00:13:00.038892  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:13:00.038924  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.042237  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.042585  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.042627  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.042753  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.042976  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.043252  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.043410  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:13:00.130665  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:13:00.158853  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1031 00:13:00.188023  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 00:13:00.214990  249055 provision.go:86] duration metric: configureAuth took 499.878655ms
	I1031 00:13:00.215020  249055 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:13:00.215284  249055 config.go:182] Loaded profile config "default-k8s-diff-port-892233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:13:00.215445  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.218339  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.218821  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.218861  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.219039  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.219282  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.219500  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.219672  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.219873  249055 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:00.220371  249055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1031 00:13:00.220411  249055 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:13:00.567578  249055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:13:00.567663  249055 machine.go:91] provisioned docker machine in 1.141189726s
	I1031 00:13:00.567680  249055 start.go:300] post-start starting for "default-k8s-diff-port-892233" (driver="kvm2")
	I1031 00:13:00.567695  249055 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:13:00.567719  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.568094  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:13:00.568134  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.570983  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.571434  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.571478  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.571649  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.571849  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.572010  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.572173  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:13:00.660300  249055 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:13:00.665751  249055 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:13:00.665779  249055 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:13:00.665853  249055 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:13:00.665958  249055 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:13:00.666046  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:13:00.677668  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:13:00.702125  249055 start.go:303] post-start completed in 134.425173ms
	I1031 00:13:00.702165  249055 fix.go:56] fixHost completed within 23.735576451s
	I1031 00:13:00.702195  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.705554  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.705976  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.706029  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.706319  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.706545  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.706722  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.706872  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.707040  249055 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:00.707449  249055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1031 00:13:00.707470  249055 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1031 00:13:00.818749  249055 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698711180.762641951
	
	I1031 00:13:00.818785  249055 fix.go:206] guest clock: 1698711180.762641951
	I1031 00:13:00.818797  249055 fix.go:219] Guest: 2023-10-31 00:13:00.762641951 +0000 UTC Remote: 2023-10-31 00:13:00.70217124 +0000 UTC m=+181.580385758 (delta=60.470711ms)
	I1031 00:13:00.818850  249055 fix.go:190] guest clock delta is within tolerance: 60.470711ms
	I1031 00:13:00.818861  249055 start.go:83] releasing machines lock for "default-k8s-diff-port-892233", held for 23.852333569s
	I1031 00:13:00.818897  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.819199  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetIP
	I1031 00:13:00.822674  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.823152  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.823194  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.823436  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.824107  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.824336  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.824543  249055 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:13:00.824603  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.824669  249055 ssh_runner.go:195] Run: cat /version.json
	I1031 00:13:00.824698  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.827622  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.828092  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.828149  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.828176  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.828377  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.828420  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.828477  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.828558  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.828638  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.828741  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.828817  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.828926  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.829014  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:13:00.829694  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:13:00.945937  249055 ssh_runner.go:195] Run: systemctl --version
	I1031 00:13:00.951731  249055 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:13:01.099346  249055 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:13:01.106701  249055 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:13:01.106789  249055 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:13:01.122651  249055 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 00:13:01.122738  249055 start.go:472] detecting cgroup driver to use...
	I1031 00:13:01.122839  249055 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:13:01.140968  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:13:01.159184  249055 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:13:01.159267  249055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:13:01.176636  249055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:13:01.190420  249055 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:13:01.304327  249055 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:13:01.446312  249055 docker.go:214] disabling docker service ...
	I1031 00:13:01.446440  249055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:13:01.462043  249055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:13:01.478402  249055 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:13:01.618099  249055 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:13:01.745376  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 00:13:01.758262  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:13:01.774927  249055 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1031 00:13:01.774999  249055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:01.784376  249055 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:13:01.784441  249055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:01.793769  249055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:01.802954  249055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:01.813429  249055 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 00:13:01.822730  249055 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:13:01.832032  249055 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 00:13:01.832103  249055 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 00:13:01.845005  249055 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 00:13:01.855358  249055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:13:01.997815  249055 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1031 00:13:02.229016  249055 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:13:02.229090  249055 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:13:02.233980  249055 start.go:540] Will wait 60s for crictl version
	I1031 00:13:02.234044  249055 ssh_runner.go:195] Run: which crictl
	I1031 00:13:02.237901  249055 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:13:02.280450  249055 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:13:02.280562  249055 ssh_runner.go:195] Run: crio --version
	I1031 00:13:02.326608  249055 ssh_runner.go:195] Run: crio --version
	I1031 00:13:02.381010  249055 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
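
Note: the CRI-O setup above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup_manager = "cgroupfs", conmon_cgroup) and then restarts crio before checking the crictl version. A minimal Go sketch of the cgroup_manager rewrite, equivalent in effect to the sed command in the log (illustrative only; requires root on the guest):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setCgroupManager rewrites the cgroup_manager line in a CRI-O drop-in
    // config, mirroring the sed invocation shown in the log above.
    func setCgroupManager(path, manager string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    	out := re.ReplaceAll(data, []byte(fmt.Sprintf("cgroup_manager = %q", manager)))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	// Path is the drop-in named in the log; the value matches the log's choice.
    	if err := setCgroupManager("/etc/crio/crio.conf.d/02-crio.conf", "cgroupfs"); err != nil {
    		fmt.Println(err)
    	}
    }
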
	I1031 00:12:57.879480  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:58.378990  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:58.401245  248718 api_server.go:72] duration metric: took 2.5625596s to wait for apiserver process to appear ...
	I1031 00:12:58.401294  248718 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:12:58.401317  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:01.483261  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:01.483293  248718 api_server.go:103] status: https://192.168.50.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:01.483309  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:01.586135  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:01.586172  248718 api_server.go:103] status: https://192.168.50.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:02.086932  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:02.095676  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:13:02.095714  248718 api_server.go:103] status: https://192.168.50.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:13:02.586339  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:02.599335  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:13:02.599376  248718 api_server.go:103] status: https://192.168.50.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:13:03.087312  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:03.095444  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 200:
	ok
	I1031 00:13:03.107809  248718 api_server.go:141] control plane version: v1.28.3
	I1031 00:13:03.107842  248718 api_server.go:131] duration metric: took 4.706538937s to wait for apiserver health ...
	I1031 00:13:03.107855  248718 cni.go:84] Creating CNI manager for ""
	I1031 00:13:03.107864  248718 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:03.110057  248718 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:13:02.382546  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetIP
	I1031 00:13:02.386646  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:02.387022  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:02.387068  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:02.387291  249055 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 00:13:02.393394  249055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:13:02.408630  249055 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:13:02.408723  249055 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:13:02.461303  249055 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1031 00:13:02.461388  249055 ssh_runner.go:195] Run: which lz4
	I1031 00:13:02.466160  249055 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1031 00:13:02.472133  249055 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 00:13:02.472175  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1031 00:13:01.647436  248387 pod_ready.go:102] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:03.653247  248387 pod_ready.go:102] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:03.111616  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:13:03.142561  248718 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
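The two lines above write minikube's bridge CNI configuration into /etc/cni/net.d/1-k8s.conflist on the guest. The 457-byte file itself is not shown in the log; the sketch below writes an illustrative minimal bridge conflist, assuming the 10.244.0.0/16 pod CIDR used elsewhere in this run (the real generated file may differ):

package main

import "os"

// An illustrative bridge CNI config; minikube's generated 1-k8s.conflist
// may contain additional or different fields.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	// The sketch writes to a local path; minikube copies the file to the
	// guest over SSH instead of writing it on the host.
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}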
	I1031 00:13:03.210454  248718 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:13:03.229202  248718 system_pods.go:59] 8 kube-system pods found
	I1031 00:13:03.229253  248718 system_pods.go:61] "coredns-5dd5756b68-dqrs4" [f6d80a09-c397-4c78-a038-f07cad11de9c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1031 00:13:03.229269  248718 system_pods.go:61] "etcd-embed-certs-078843" [2dd3d20f-1309-4ec9-ab75-6b00cadc5827] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1031 00:13:03.229278  248718 system_pods.go:61] "kube-apiserver-embed-certs-078843" [6a41123e-11a9-4aff-8f78-802b8f59a1bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1031 00:13:03.229289  248718 system_pods.go:61] "kube-controller-manager-embed-certs-078843" [9ccb551e-3e3f-4cdc-991e-65b41febf105] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1031 00:13:03.229302  248718 system_pods.go:61] "kube-proxy-287dq" [c9c3a3a9-ff79-4cd8-ab26-a4ca2bec1fd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1031 00:13:03.229321  248718 system_pods.go:61] "kube-scheduler-embed-certs-078843" [13a0f095-b945-437c-a7ef-929739bfcb01] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1031 00:13:03.229339  248718 system_pods.go:61] "metrics-server-57f55c9bc5-pm6qx" [5ed61015-eb88-4381-adc3-8d1f4021c6aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:13:03.229353  248718 system_pods.go:61] "storage-provisioner" [6bce0572-aad8-4a9f-978f-9bd0ff62904a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1031 00:13:03.229369  248718 system_pods.go:74] duration metric: took 18.888134ms to wait for pod list to return data ...
	I1031 00:13:03.229379  248718 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:13:03.269761  248718 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:13:03.269808  248718 node_conditions.go:123] node cpu capacity is 2
	I1031 00:13:03.269821  248718 node_conditions.go:105] duration metric: took 40.435389ms to run NodePressure ...
	I1031 00:13:03.269843  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:03.828792  248718 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1031 00:13:03.840423  248718 kubeadm.go:787] kubelet initialised
	I1031 00:13:03.840449  248718 kubeadm.go:788] duration metric: took 11.631934ms waiting for restarted kubelet to initialise ...
	I1031 00:13:03.840461  248718 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:13:03.856214  248718 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:03.885090  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.885128  248718 pod_ready.go:81] duration metric: took 28.821802ms waiting for pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:03.885141  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.885169  248718 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:03.903365  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "etcd-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.903468  248718 pod_ready.go:81] duration metric: took 18.286782ms waiting for pod "etcd-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:03.903494  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "etcd-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.903516  248718 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:03.918470  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.918511  248718 pod_ready.go:81] duration metric: took 14.954407ms waiting for pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:03.918536  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.918548  248718 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:03.933999  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.934040  248718 pod_ready.go:81] duration metric: took 15.480835ms waiting for pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:03.934057  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.934068  248718 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-287dq" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:04.237338  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "kube-proxy-287dq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.237374  248718 pod_ready.go:81] duration metric: took 303.296061ms waiting for pod "kube-proxy-287dq" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:04.237389  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "kube-proxy-287dq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.237398  248718 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:04.634179  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.634222  248718 pod_ready.go:81] duration metric: took 396.814691ms waiting for pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:04.634238  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.634253  248718 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:05.035746  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:05.035785  248718 pod_ready.go:81] duration metric: took 401.520697ms waiting for pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:05.035801  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:05.035816  248718 pod_ready.go:38] duration metric: took 1.195339888s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
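Each pod_ready entry above waits for a pod's Ready condition to become True. A hedged sketch of the same check done with kubectl's jsonpath output, assuming an illustrative context, namespace, and poll interval:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady returns true when the pod's Ready condition reports "True".
func podReady(kubectx, namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", kubectx,
		"-n", namespace, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	for {
		ok, err := podReady("embed-certs-078843", "kube-system", "coredns-5dd5756b68-dqrs4")
		if err != nil {
			fmt.Println("kubectl failed:", err)
		} else if ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}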
	I1031 00:13:05.035852  248718 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:13:05.053467  248718 ops.go:34] apiserver oom_adj: -16
	I1031 00:13:05.053499  248718 kubeadm.go:640] restartCluster took 20.703241237s
	I1031 00:13:05.053510  248718 kubeadm.go:406] StartCluster complete in 20.760104259s
	I1031 00:13:05.053534  248718 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:13:05.053649  248718 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:13:05.056586  248718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:13:05.056927  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:13:05.057035  248718 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:13:05.057123  248718 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-078843"
	I1031 00:13:05.057141  248718 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-078843"
	W1031 00:13:05.057149  248718 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:13:05.057204  248718 config.go:182] Loaded profile config "embed-certs-078843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:13:05.057234  248718 addons.go:69] Setting default-storageclass=true in profile "embed-certs-078843"
	I1031 00:13:05.057211  248718 host.go:66] Checking if "embed-certs-078843" exists ...
	I1031 00:13:05.057248  248718 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-078843"
	I1031 00:13:05.057647  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.057682  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.057706  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.057743  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.057816  248718 addons.go:69] Setting metrics-server=true in profile "embed-certs-078843"
	I1031 00:13:05.057835  248718 addons.go:231] Setting addon metrics-server=true in "embed-certs-078843"
	W1031 00:13:05.057846  248718 addons.go:240] addon metrics-server should already be in state true
	I1031 00:13:05.057940  248718 host.go:66] Checking if "embed-certs-078843" exists ...
	I1031 00:13:05.058407  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.058492  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.077590  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40411
	I1031 00:13:05.077948  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44471
	I1031 00:13:05.078081  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.078347  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.078769  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.078785  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.079028  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.079054  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.079408  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.085132  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.085145  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34653
	I1031 00:13:05.085597  248718 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-078843" context rescaled to 1 replicas
	I1031 00:13:05.085640  248718 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:13:05.088029  248718 out.go:177] * Verifying Kubernetes components...
	I1031 00:13:05.085726  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.085922  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.086067  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:13:05.089646  248718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:13:05.089718  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.090571  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.090592  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.091096  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.091945  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.092003  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.095067  248718 addons.go:231] Setting addon default-storageclass=true in "embed-certs-078843"
	W1031 00:13:05.095093  248718 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:13:05.095131  248718 host.go:66] Checking if "embed-certs-078843" exists ...
	I1031 00:13:05.095551  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.095608  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.111102  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I1031 00:13:05.111739  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.112393  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.112413  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.112797  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.112983  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:13:05.114423  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37229
	I1031 00:13:05.114993  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.115615  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.115634  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.115848  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:13:05.116042  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.118503  248718 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:13:05.116288  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:13:05.120126  248718 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:13:05.120149  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:13:05.120184  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:13:05.120637  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I1031 00:13:05.121136  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.121582  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.121601  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.122054  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.122163  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:13:05.122536  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.122576  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.124417  248718 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:13:00.852003  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Start
	I1031 00:13:00.853038  248084 main.go:141] libmachine: (old-k8s-version-225140) Ensuring networks are active...
	I1031 00:13:00.853268  248084 main.go:141] libmachine: (old-k8s-version-225140) Ensuring network default is active
	I1031 00:13:00.853774  248084 main.go:141] libmachine: (old-k8s-version-225140) Ensuring network mk-old-k8s-version-225140 is active
	I1031 00:13:00.854290  248084 main.go:141] libmachine: (old-k8s-version-225140) Getting domain xml...
	I1031 00:13:00.855089  248084 main.go:141] libmachine: (old-k8s-version-225140) Creating domain...
	I1031 00:13:02.250983  248084 main.go:141] libmachine: (old-k8s-version-225140) Waiting to get IP...
	I1031 00:13:02.251883  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:02.252351  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:02.252421  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:02.252327  249826 retry.go:31] will retry after 242.989359ms: waiting for machine to come up
	I1031 00:13:02.497099  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:02.497647  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:02.497671  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:02.497581  249826 retry.go:31] will retry after 267.660992ms: waiting for machine to come up
	I1031 00:13:02.767445  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:02.770812  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:02.770846  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:02.770757  249826 retry.go:31] will retry after 311.592507ms: waiting for machine to come up
	I1031 00:13:03.085650  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:03.086233  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:03.086262  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:03.086139  249826 retry.go:31] will retry after 594.222148ms: waiting for machine to come up
	I1031 00:13:03.681721  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:03.682255  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:03.682286  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:03.682147  249826 retry.go:31] will retry after 758.043103ms: waiting for machine to come up
	I1031 00:13:04.442274  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:04.443048  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:04.443078  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:04.442997  249826 retry.go:31] will retry after 887.518169ms: waiting for machine to come up
	I1031 00:13:05.332541  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:05.333184  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:05.333212  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:05.333129  249826 retry.go:31] will retry after 851.434462ms: waiting for machine to come up
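The DBG lines above show the kvm2 driver retrying its DHCP-lease lookup with growing randomized delays ("will retry after ..."). A generic sketch of that retry-after pattern, with a stand-in lookup function rather than the driver's real code:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for the driver's DHCP-lease query; in this sketch it
// always fails, as the real lookup does until the VM obtains an address.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// retryAfter keeps calling fn with randomized, growing delays, similar to the
// "will retry after ..." messages in the log.
func retryAfter(fn func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := fn(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // back off gradually
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	if _, err := retryAfter(lookupIP, 5); err != nil {
		fmt.Println(err)
	}
}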
	I1031 00:13:05.125889  248718 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:13:05.125912  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:13:05.125931  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:13:05.124466  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.126004  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:13:05.126025  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.125276  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:13:05.126198  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:13:05.126338  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:13:05.126414  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:13:05.131827  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:13:05.131844  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.131883  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:13:05.131916  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.132049  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:13:05.132274  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:13:05.132420  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:13:05.144729  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I1031 00:13:05.145178  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.145775  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.145795  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.146202  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.146381  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:13:05.149644  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:13:05.150317  248718 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:13:05.150332  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:13:05.150350  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:13:05.153417  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.153915  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:13:05.153956  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.154082  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:13:05.154266  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:13:05.154606  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:13:05.154731  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:13:05.279166  248718 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:13:05.279209  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:13:05.314989  248718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:13:05.318765  248718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:13:05.337844  248718 node_ready.go:35] waiting up to 6m0s for node "embed-certs-078843" to be "Ready" ...
	I1031 00:13:05.338209  248718 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1031 00:13:05.343889  248718 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:13:05.343913  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:13:05.391973  248718 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:13:05.392002  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:13:05.442745  248718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:13:06.821970  248718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.503163864s)
	I1031 00:13:06.822030  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.822047  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.821970  248718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.506945748s)
	I1031 00:13:06.822097  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.822123  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.822539  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.822568  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.822594  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.822620  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.822641  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.822654  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.822665  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.822689  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.822702  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.822711  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.823128  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.823187  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.823196  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.823249  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.823286  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.823305  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.838726  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.838749  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.839036  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.839101  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.839124  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.863966  248718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.421170822s)
	I1031 00:13:06.864085  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.864105  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.864472  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.864499  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.864511  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.864520  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.865117  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.865133  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.865136  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.865144  248718 addons.go:467] Verifying addon metrics-server=true in "embed-certs-078843"
	I1031 00:13:06.868351  248718 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1031 00:13:06.869950  248718 addons.go:502] enable addons completed in 1.812918702s: enabled=[storage-provisioner default-storageclass metrics-server]
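The addon flow above copies each manifest into /etc/kubernetes/addons/ on the guest and applies it with the guest's own kubectl binary and kubeconfig. A minimal sketch of that apply step, reusing the binary and kubeconfig paths from the log (illustrative only, not minikube's addons code):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Mirrors "sudo KUBECONFIG=/var/lib/minikube/kubeconfig .../kubectl apply -f ..."
	// from the log; sudo accepts the VAR=value assignment before the command.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.28.3/kubectl",
		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}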
	I1031 00:13:07.438581  248718 node_ready.go:58] node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.402138  249055 crio.go:444] Took 1.936056 seconds to copy over tarball
	I1031 00:13:04.402221  249055 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 00:13:07.956805  249055 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.554540356s)
	I1031 00:13:07.956841  249055 crio.go:451] Took 3.554667 seconds to extract the tarball
	I1031 00:13:07.956854  249055 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 00:13:08.017763  249055 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:13:08.072921  249055 crio.go:496] all images are preloaded for cri-o runtime.
	I1031 00:13:08.072982  249055 cache_images.go:84] Images are preloaded, skipping loading
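The preload flow above copies the preloaded image tarball to the guest, unpacks it under /var, removes it, and re-checks crictl images. A minimal sketch of the extract-and-clean-up step, using the paths from the log (it requires lz4 and root on the target):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Extract the lz4-compressed image preload into /var, mirroring
	// "sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4" from the log.
	tar := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	tar.Stdout, tar.Stderr = os.Stdout, os.Stderr
	if err := tar.Run(); err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	// Remove the tarball once the images are unpacked, as the log shows.
	if err := exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run(); err != nil {
		fmt.Println("cleanup failed:", err)
	}
}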
	I1031 00:13:08.073063  249055 ssh_runner.go:195] Run: crio config
	I1031 00:13:08.131013  249055 cni.go:84] Creating CNI manager for ""
	I1031 00:13:08.131045  249055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:08.131070  249055 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:13:08.131099  249055 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-892233 NodeName:default-k8s-diff-port-892233 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 00:13:08.131362  249055 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-892233"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 00:13:08.131583  249055 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-892233 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-892233 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
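The [Unit]/[Service] block above is the kubelet systemd drop-in minikube generates for the node. A sketch of rendering a similar drop-in with text/template, assuming illustrative field names rather than minikube's actual structs:

package main

import (
	"os"
	"text/template"
)

// Illustrative inputs; minikube derives these from the cluster config.
type kubeletOpts struct {
	BinaryDir string
	Hostname  string
	NodeIP    string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinaryDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("10-kubeadm.conf").Parse(dropIn))
	opts := kubeletOpts{
		BinaryDir: "/var/lib/minikube/binaries/v1.28.3",
		Hostname:  "default-k8s-diff-port-892233",
		NodeIP:    "192.168.39.2",
	}
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}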
	I1031 00:13:08.131658  249055 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 00:13:08.140884  249055 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:13:08.140973  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:13:08.149405  249055 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (386 bytes)
	I1031 00:13:08.166006  249055 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:13:08.182874  249055 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1031 00:13:08.200304  249055 ssh_runner.go:195] Run: grep 192.168.39.2	control-plane.minikube.internal$ /etc/hosts
	I1031 00:13:08.203993  249055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:13:08.217645  249055 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233 for IP: 192.168.39.2
	I1031 00:13:08.217692  249055 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:13:08.217873  249055 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:13:08.217924  249055 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:13:08.218015  249055 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/client.key
	I1031 00:13:08.308243  249055 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/apiserver.key.dd3b77ed
	I1031 00:13:08.308354  249055 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/proxy-client.key
	I1031 00:13:08.308540  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:13:08.308606  249055 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:13:08.308626  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:13:08.308652  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:13:08.308678  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:13:08.308701  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:13:08.308743  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:13:08.309489  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:13:08.339601  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:13:08.365873  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:13:08.393028  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1031 00:13:08.418983  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:13:08.445555  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:13:08.471234  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:13:08.496657  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:13:08.522698  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:13:08.546933  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:13:08.570645  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:13:08.596096  249055 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:13:08.615431  249055 ssh_runner.go:195] Run: openssl version
	I1031 00:13:08.621901  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:13:08.633316  249055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:13:08.638479  249055 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:13:08.638546  249055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:13:08.644750  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:13:08.656306  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:13:08.669978  249055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:08.675964  249055 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:08.676033  249055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:08.682433  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 00:13:08.694215  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:13:08.706255  249055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:13:08.713046  249055 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:13:08.713147  249055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:13:08.720902  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1031 00:13:08.732062  249055 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:13:08.737112  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:13:08.745040  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:13:08.753046  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:13:08.759410  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:13:08.765847  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:13:08.772651  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
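The six openssl runs above ask whether each control-plane certificate stays valid for at least another 86400 seconds. A small Go sketch of the equivalent check with crypto/x509, using one of the certificate paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid for d,
// mirroring "openssl x509 -noout -in <path> -checkend 86400".
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("valid for another 24h:", ok)
}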
	I1031 00:13:08.779086  249055 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-892233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-892233 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.2 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:13:08.779224  249055 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:13:08.779292  249055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:13:08.832024  249055 cri.go:89] found id: ""
	I1031 00:13:08.832096  249055 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 00:13:08.842618  249055 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 00:13:08.842641  249055 kubeadm.go:636] restartCluster start
	I1031 00:13:08.842716  249055 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 00:13:08.852209  249055 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:08.853480  249055 kubeconfig.go:92] found "default-k8s-diff-port-892233" server: "https://192.168.39.2:8444"
	I1031 00:13:08.855965  249055 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 00:13:08.865555  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:08.865617  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:08.877258  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:08.877285  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:08.877332  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:08.887847  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:05.643929  248387 pod_ready.go:92] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:05.643958  248387 pod_ready.go:81] duration metric: took 8.31111047s waiting for pod "kube-scheduler-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:05.643971  248387 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:07.946810  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:06.186224  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:06.186916  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:06.186948  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:06.186867  249826 retry.go:31] will retry after 964.405003ms: waiting for machine to come up
	I1031 00:13:07.153455  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:07.153973  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:07.154006  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:07.153917  249826 retry.go:31] will retry after 1.515980724s: waiting for machine to come up
	I1031 00:13:08.671700  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:08.672189  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:08.672219  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:08.672117  249826 retry.go:31] will retry after 2.254841495s: waiting for machine to come up
	I1031 00:13:09.658372  248718 node_ready.go:58] node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:11.938230  248718 node_ready.go:58] node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:12.439097  248718 node_ready.go:49] node "embed-certs-078843" has status "Ready":"True"
	I1031 00:13:12.439129  248718 node_ready.go:38] duration metric: took 7.101255254s waiting for node "embed-certs-078843" to be "Ready" ...
	I1031 00:13:12.439147  248718 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:13:12.447673  248718 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.469967  248718 pod_ready.go:92] pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.470002  248718 pod_ready.go:81] duration metric: took 22.292329ms waiting for pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.470017  248718 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.482061  248718 pod_ready.go:92] pod "etcd-embed-certs-078843" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.482092  248718 pod_ready.go:81] duration metric: took 12.066806ms waiting for pod "etcd-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.482106  248718 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.489019  248718 pod_ready.go:92] pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.489052  248718 pod_ready.go:81] duration metric: took 6.936171ms waiting for pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.489066  248718 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.500686  248718 pod_ready.go:92] pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.500712  248718 pod_ready.go:81] duration metric: took 11.637946ms waiting for pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.500722  248718 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-287dq" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:09.388669  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:09.388776  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:09.400708  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:09.888027  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:09.888146  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:09.900678  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:10.388004  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:10.388114  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:10.403685  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:10.888198  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:10.888314  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:10.900608  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:11.388239  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:11.388363  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:11.404992  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:11.888425  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:11.888541  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:11.900436  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:12.388293  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:12.388418  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:12.404621  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:12.888037  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:12.888156  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:12.900860  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:13.388276  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:13.388371  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:13.400841  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:13.888124  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:13.888238  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:13.903041  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:10.168791  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:12.169662  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:14.669047  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:10.928893  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:10.929414  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:10.929445  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:10.929369  249826 retry.go:31] will retry after 2.792980456s: waiting for machine to come up
	I1031 00:13:13.724006  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:13.724430  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:13.724469  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:13.724356  249826 retry.go:31] will retry after 2.555956413s: waiting for machine to come up
	I1031 00:13:12.838631  248718 pod_ready.go:92] pod "kube-proxy-287dq" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.838658  248718 pod_ready.go:81] duration metric: took 337.929955ms waiting for pod "kube-proxy-287dq" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.838668  248718 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:13.239513  248718 pod_ready.go:92] pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:13.239541  248718 pod_ready.go:81] duration metric: took 400.86714ms waiting for pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:13.239552  248718 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:15.546507  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:14.388661  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:14.388736  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:14.402388  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:14.888855  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:14.888965  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:14.903137  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:15.388757  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:15.388868  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:15.404412  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:15.888848  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:15.888984  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:15.902181  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:16.388790  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:16.388913  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:16.402283  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:16.888892  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:16.889035  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:16.900677  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:17.388842  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:17.388983  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:17.401399  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:17.888981  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:17.889099  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:17.901474  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:18.387997  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:18.388083  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:18.399745  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:18.866186  249055 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:13:18.866263  249055 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:13:18.866282  249055 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:13:18.866352  249055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:13:18.906125  249055 cri.go:89] found id: ""
	I1031 00:13:18.906214  249055 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:13:18.921555  249055 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:13:18.930111  249055 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:13:18.930193  249055 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:13:18.938516  249055 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:13:18.938545  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:19.070700  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:17.167517  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:19.170710  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:16.282473  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:16.282944  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:16.282975  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:16.282900  249826 retry.go:31] will retry after 2.811414756s: waiting for machine to come up
	I1031 00:13:19.096338  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:19.096738  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:19.096760  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:19.096714  249826 retry.go:31] will retry after 3.844203493s: waiting for machine to come up
	I1031 00:13:17.548558  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:20.047074  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:22.047691  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:20.139806  249055 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069066882s)
	I1031 00:13:20.139847  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:20.337823  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:20.417915  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:20.499750  249055 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:13:20.499831  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:20.515735  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:21.029420  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:21.529636  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:22.029757  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:22.529034  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:23.029479  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:23.055542  249055 api_server.go:72] duration metric: took 2.555800185s to wait for apiserver process to appear ...
	I1031 00:13:23.055573  249055 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:13:23.055591  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:21.667545  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:24.167560  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:22.943000  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:22.943492  248084 main.go:141] libmachine: (old-k8s-version-225140) Found IP for machine: 192.168.72.65
	I1031 00:13:22.943521  248084 main.go:141] libmachine: (old-k8s-version-225140) Reserving static IP address...
	I1031 00:13:22.943540  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has current primary IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:22.944080  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "old-k8s-version-225140", mac: "52:54:00:9c:98:61", ip: "192.168.72.65"} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:22.944120  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | skip adding static IP to network mk-old-k8s-version-225140 - found existing host DHCP lease matching {name: "old-k8s-version-225140", mac: "52:54:00:9c:98:61", ip: "192.168.72.65"}
	I1031 00:13:22.944139  248084 main.go:141] libmachine: (old-k8s-version-225140) Reserved static IP address: 192.168.72.65
	I1031 00:13:22.944160  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Getting to WaitForSSH function...
	I1031 00:13:22.944168  248084 main.go:141] libmachine: (old-k8s-version-225140) Waiting for SSH to be available...
	I1031 00:13:22.946799  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:22.947189  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:22.947222  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:22.947416  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Using SSH client type: external
	I1031 00:13:22.947448  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa (-rw-------)
	I1031 00:13:22.947508  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:13:22.947534  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | About to run SSH command:
	I1031 00:13:22.947581  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | exit 0
	I1031 00:13:23.045850  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | SSH cmd err, output: <nil>: 
	I1031 00:13:23.046239  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetConfigRaw
	I1031 00:13:23.046996  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetIP
	I1031 00:13:23.050061  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.050464  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.050496  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.050789  248084 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/config.json ...
	I1031 00:13:23.051046  248084 machine.go:88] provisioning docker machine ...
	I1031 00:13:23.051070  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:23.051289  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetMachineName
	I1031 00:13:23.051484  248084 buildroot.go:166] provisioning hostname "old-k8s-version-225140"
	I1031 00:13:23.051511  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetMachineName
	I1031 00:13:23.051731  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.054157  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.054603  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.054636  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.054784  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:23.055085  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.055291  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.055503  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:23.055718  248084 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:23.056178  248084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1031 00:13:23.056203  248084 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-225140 && echo "old-k8s-version-225140" | sudo tee /etc/hostname
	I1031 00:13:23.184296  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-225140
	
	I1031 00:13:23.184356  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.187270  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.187720  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.187761  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.187895  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:23.188085  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.188228  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.188340  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:23.188565  248084 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:23.189104  248084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1031 00:13:23.189135  248084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-225140' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-225140/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-225140' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:13:23.315792  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:13:23.315829  248084 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:13:23.315893  248084 buildroot.go:174] setting up certificates
	I1031 00:13:23.315906  248084 provision.go:83] configureAuth start
	I1031 00:13:23.315921  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetMachineName
	I1031 00:13:23.316224  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetIP
	I1031 00:13:23.319690  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.320111  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.320143  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.320315  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.322897  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.323334  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.323362  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.323720  248084 provision.go:138] copyHostCerts
	I1031 00:13:23.323803  248084 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:13:23.323820  248084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:13:23.323895  248084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:13:23.324025  248084 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:13:23.324043  248084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:13:23.324080  248084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:13:23.324257  248084 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:13:23.324272  248084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:13:23.324313  248084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:13:23.324415  248084 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-225140 san=[192.168.72.65 192.168.72.65 localhost 127.0.0.1 minikube old-k8s-version-225140]
	I1031 00:13:23.580836  248084 provision.go:172] copyRemoteCerts
	I1031 00:13:23.580905  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:13:23.580929  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.584088  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.584527  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.584576  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.584872  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:23.585115  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.585290  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:23.585440  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:13:23.680241  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1031 00:13:23.706003  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 00:13:23.730993  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:13:23.760873  248084 provision.go:86] duration metric: configureAuth took 444.934236ms
	I1031 00:13:23.760909  248084 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:13:23.761208  248084 config.go:182] Loaded profile config "old-k8s-version-225140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1031 00:13:23.761370  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.764798  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.765219  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.765273  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.765411  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:23.765646  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.765868  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.766036  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:23.766256  248084 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:23.766762  248084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1031 00:13:23.766796  248084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:13:24.109914  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:13:24.109946  248084 machine.go:91] provisioned docker machine in 1.058882555s
	I1031 00:13:24.109958  248084 start.go:300] post-start starting for "old-k8s-version-225140" (driver="kvm2")
	I1031 00:13:24.109972  248084 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:13:24.109994  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.110392  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:13:24.110456  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:24.113825  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.114298  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.114335  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.114587  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:24.114814  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.114989  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:24.115148  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:13:24.206997  248084 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:13:24.211439  248084 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:13:24.211467  248084 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:13:24.211551  248084 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:13:24.211635  248084 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:13:24.211722  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:13:24.219976  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:13:24.246337  248084 start.go:303] post-start completed in 136.360652ms
	I1031 00:13:24.246366  248084 fix.go:56] fixHost completed within 23.427336969s
	I1031 00:13:24.246389  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:24.249547  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.249876  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.249919  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.250099  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:24.250300  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.250603  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.250815  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:24.251022  248084 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:24.251387  248084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1031 00:13:24.251413  248084 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 00:13:24.366477  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698711204.302770779
	
	I1031 00:13:24.366499  248084 fix.go:206] guest clock: 1698711204.302770779
	I1031 00:13:24.366507  248084 fix.go:219] Guest: 2023-10-31 00:13:24.302770779 +0000 UTC Remote: 2023-10-31 00:13:24.246369619 +0000 UTC m=+368.452785688 (delta=56.40116ms)
	I1031 00:13:24.366558  248084 fix.go:190] guest clock delta is within tolerance: 56.40116ms
	I1031 00:13:24.366570  248084 start.go:83] releasing machines lock for "old-k8s-version-225140", held for 23.547580429s
	I1031 00:13:24.366599  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.366871  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetIP
	I1031 00:13:24.369640  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.369985  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.370032  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.370155  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.370695  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.370910  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.370996  248084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:13:24.371044  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:24.371205  248084 ssh_runner.go:195] Run: cat /version.json
	I1031 00:13:24.371233  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:24.373962  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.374315  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.374349  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.374379  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.374621  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:24.374759  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.374796  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.374822  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.374952  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:24.375018  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:24.375140  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.375139  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:13:24.375278  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:24.375383  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:13:24.490387  248084 ssh_runner.go:195] Run: systemctl --version
	I1031 00:13:24.497758  248084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:13:24.645967  248084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:13:24.652716  248084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:13:24.652795  248084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:13:24.668415  248084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 00:13:24.668446  248084 start.go:472] detecting cgroup driver to use...
	I1031 00:13:24.668513  248084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:13:24.683255  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:13:24.697242  248084 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:13:24.697295  248084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:13:24.710554  248084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:13:24.725562  248084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:13:24.847447  248084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:13:24.982382  248084 docker.go:214] disabling docker service ...
	I1031 00:13:24.982477  248084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:13:24.998270  248084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:13:25.011136  248084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:13:25.129421  248084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:13:25.258387  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 00:13:25.271528  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:13:25.291702  248084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1031 00:13:25.291788  248084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:25.301762  248084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:13:25.301826  248084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:25.311900  248084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:25.322111  248084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:25.331429  248084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 00:13:25.344907  248084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:13:25.354397  248084 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 00:13:25.354463  248084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 00:13:25.367335  248084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 00:13:25.376415  248084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:13:25.493551  248084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1031 00:13:25.677504  248084 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:13:25.677648  248084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:13:25.683882  248084 start.go:540] Will wait 60s for crictl version
	I1031 00:13:25.683952  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:25.687748  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:13:25.729230  248084 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:13:25.729316  248084 ssh_runner.go:195] Run: crio --version
	I1031 00:13:25.782619  248084 ssh_runner.go:195] Run: crio --version
	I1031 00:13:25.832400  248084 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1031 00:13:25.833898  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetIP
	I1031 00:13:25.836924  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:25.837347  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:25.837372  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:25.837666  248084 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1031 00:13:25.841940  248084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:13:24.051460  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:26.554325  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:26.499116  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:26.499157  249055 api_server.go:103] status: https://192.168.39.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:26.499172  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:26.509898  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:26.509929  249055 api_server.go:103] status: https://192.168.39.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:27.010543  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:27.024054  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:13:27.024104  249055 api_server.go:103] status: https://192.168.39.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:13:27.510303  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:27.518621  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:13:27.518658  249055 api_server.go:103] status: https://192.168.39.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:13:28.010147  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:28.017834  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 200:
	ok
	I1031 00:13:28.027903  249055 api_server.go:141] control plane version: v1.28.3
	I1031 00:13:28.028005  249055 api_server.go:131] duration metric: took 4.972421145s to wait for apiserver health ...
	I1031 00:13:28.028033  249055 cni.go:84] Creating CNI manager for ""
	I1031 00:13:28.028070  249055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:28.030427  249055 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:13:28.032020  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:13:28.042889  249055 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:13:28.084357  249055 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:13:28.114368  249055 system_pods.go:59] 8 kube-system pods found
	I1031 00:13:28.114416  249055 system_pods.go:61] "coredns-5dd5756b68-6sbs7" [4cf52749-359c-42b7-a985-d2cdc3f20700] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1031 00:13:28.114430  249055 system_pods.go:61] "etcd-default-k8s-diff-port-892233" [75c06d7d-877d-4df8-9805-0ea50aec938f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1031 00:13:28.114440  249055 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-892233" [6eb1d4f8-0594-4992-962c-383062853ed0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1031 00:13:28.114460  249055 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-892233" [8b5e8ab9-34fe-4337-95d1-554adbd23505] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1031 00:13:28.114470  249055 system_pods.go:61] "kube-proxy-jn2j8" [23f4d9d7-61a0-43d9-a815-a4ce10a568e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1031 00:13:28.114479  249055 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-892233" [dcb7e68d-4e3d-4e46-935a-1372309ad89c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1031 00:13:28.114488  249055 system_pods.go:61] "metrics-server-57f55c9bc5-7klqw" [3f832e2c-81b4-431e-b1a2-987057fdae0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:13:28.114502  249055 system_pods.go:61] "storage-provisioner" [b912cf02-280b-47e0-8e72-fd22566a40f9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1031 00:13:28.114515  249055 system_pods.go:74] duration metric: took 30.127265ms to wait for pod list to return data ...
	I1031 00:13:28.114534  249055 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:13:28.126920  249055 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:13:28.126971  249055 node_conditions.go:123] node cpu capacity is 2
	I1031 00:13:28.127018  249055 node_conditions.go:105] duration metric: took 12.476154ms to run NodePressure ...
	I1031 00:13:28.127048  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:28.402286  249055 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1031 00:13:28.407352  249055 kubeadm.go:787] kubelet initialised
	I1031 00:13:28.407384  249055 kubeadm.go:788] duration metric: took 5.069821ms waiting for restarted kubelet to initialise ...
	I1031 00:13:28.407397  249055 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:13:28.413100  249055 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:26.174532  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:28.667350  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:25.856078  248084 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1031 00:13:25.856136  248084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:13:25.913612  248084 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1031 00:13:25.913733  248084 ssh_runner.go:195] Run: which lz4
	I1031 00:13:25.918632  248084 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1031 00:13:25.923981  248084 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 00:13:25.924014  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1031 00:13:27.712494  248084 crio.go:444] Took 1.793896 seconds to copy over tarball
	I1031 00:13:27.712615  248084 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 00:13:29.050835  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:31.549536  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:30.457173  249055 pod_ready.go:102] pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:33.255838  249055 pod_ready.go:102] pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:30.667667  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:33.167250  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:31.207204  248084 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.494544747s)
	I1031 00:13:31.207238  248084 crio.go:451] Took 3.494710 seconds to extract the tarball
	I1031 00:13:31.207250  248084 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 00:13:31.253648  248084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:13:31.312599  248084 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1031 00:13:31.312624  248084 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1031 00:13:31.312719  248084 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.312753  248084 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.312763  248084 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.312776  248084 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1031 00:13:31.312705  248084 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:13:31.313005  248084 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.313122  248084 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.312926  248084 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.314301  248084 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.314408  248084 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:13:31.314826  248084 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.314863  248084 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.314835  248084 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.314877  248084 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.314888  248084 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.314904  248084 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1031 00:13:31.492117  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.493373  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.506179  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.506237  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1031 00:13:31.510547  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.515827  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.524137  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.614442  248084 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1031 00:13:31.614494  248084 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.614544  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.622661  248084 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1031 00:13:31.622718  248084 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.622770  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.630473  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:13:31.674058  248084 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1031 00:13:31.674111  248084 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.674161  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.707251  248084 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1031 00:13:31.707293  248084 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1031 00:13:31.707337  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.718947  248084 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1031 00:13:31.719006  248084 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.719008  248084 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1031 00:13:31.718947  248084 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1031 00:13:31.719056  248084 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.719072  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.719084  248084 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.719111  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.719119  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.719139  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.719176  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.866787  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.866815  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1031 00:13:31.866818  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.866883  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1031 00:13:31.866887  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.866936  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1031 00:13:31.867046  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.993265  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1031 00:13:31.993505  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1031 00:13:31.993999  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1031 00:13:31.994045  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1031 00:13:31.994063  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1031 00:13:31.994123  248084 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1031 00:13:31.999020  248084 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1031 00:13:31.999034  248084 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1031 00:13:31.999068  248084 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1031 00:13:33.460498  248084 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.461402246s)
	I1031 00:13:33.460530  248084 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1031 00:13:33.460582  248084 cache_images.go:92] LoadImages completed in 2.147945804s
	W1031 00:13:33.460661  248084 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I1031 00:13:33.460749  248084 ssh_runner.go:195] Run: crio config
	I1031 00:13:33.528812  248084 cni.go:84] Creating CNI manager for ""
	I1031 00:13:33.528838  248084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:33.528865  248084 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:13:33.528895  248084 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.65 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-225140 NodeName:old-k8s-version-225140 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1031 00:13:33.529103  248084 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-225140"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-225140
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.65:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 00:13:33.529205  248084 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-225140 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-225140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 00:13:33.529276  248084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1031 00:13:33.539328  248084 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:13:33.539424  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:13:33.551543  248084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1031 00:13:33.569095  248084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:13:33.586561  248084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1031 00:13:33.605084  248084 ssh_runner.go:195] Run: grep 192.168.72.65	control-plane.minikube.internal$ /etc/hosts
	I1031 00:13:33.609322  248084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:13:33.623527  248084 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140 for IP: 192.168.72.65
	I1031 00:13:33.623556  248084 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:13:33.623768  248084 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:13:33.623817  248084 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:13:33.623919  248084 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.key
	I1031 00:13:33.624000  248084 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/apiserver.key.fa85241c
	I1031 00:13:33.624074  248084 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/proxy-client.key
	I1031 00:13:33.624223  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:13:33.624267  248084 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:13:33.624285  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:13:33.624333  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:13:33.624377  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:13:33.624409  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:13:33.624480  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:13:33.625311  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:13:33.648457  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:13:33.673383  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:13:33.701679  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 00:13:33.725823  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:13:33.748912  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:13:33.777397  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:13:33.803003  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:13:33.827749  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:13:33.850011  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:13:33.871722  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:13:33.894663  248084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:13:33.912130  248084 ssh_runner.go:195] Run: openssl version
	I1031 00:13:33.918010  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:13:33.928381  248084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:13:33.933548  248084 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:13:33.933605  248084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:13:33.939344  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1031 00:13:33.950844  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:13:33.962585  248084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:13:33.968178  248084 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:13:33.968244  248084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:13:33.975606  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:13:33.986565  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:13:33.998188  248084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:34.003940  248084 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:34.004012  248084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:34.010088  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 00:13:34.022223  248084 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:13:34.028537  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:13:34.036319  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:13:34.043481  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:13:34.051269  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:13:34.058129  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:13:34.065473  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1031 00:13:34.072663  248084 kubeadm.go:404] StartCluster: {Name:old-k8s-version-225140 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-225140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.65 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:13:34.072781  248084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:13:34.072830  248084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:13:34.121758  248084 cri.go:89] found id: ""
	I1031 00:13:34.121848  248084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 00:13:34.135357  248084 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 00:13:34.135392  248084 kubeadm.go:636] restartCluster start
	I1031 00:13:34.135469  248084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 00:13:34.145173  248084 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:34.146905  248084 kubeconfig.go:92] found "old-k8s-version-225140" server: "https://192.168.72.65:8443"
	I1031 00:13:34.150660  248084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 00:13:34.163037  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:34.163119  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:34.184414  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:34.184441  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:34.184586  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:34.197787  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:34.698120  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:34.698246  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:34.710874  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:35.198312  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:35.198384  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:35.210933  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:35.698108  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:35.698210  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:35.710184  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:33.551354  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:36.048781  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:35.442171  249055 pod_ready.go:102] pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:36.941322  249055 pod_ready.go:92] pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:36.941344  249055 pod_ready.go:81] duration metric: took 8.528221711s waiting for pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:36.941353  249055 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:38.959679  249055 pod_ready.go:102] pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:35.168250  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:37.666699  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:36.198699  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:36.198787  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:36.211005  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:36.698612  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:36.698705  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:36.712106  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:37.198674  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:37.198779  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:37.211665  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:37.698160  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:37.698258  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:37.709798  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:38.198294  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:38.198410  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:38.210400  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:38.697965  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:38.698058  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:38.710188  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:39.198306  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:39.198435  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:39.210213  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:39.698867  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:39.698944  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:39.709958  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:40.198113  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:40.198217  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:40.209265  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:40.698424  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:40.698494  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:40.715194  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:38.548167  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:41.047378  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:39.959598  249055 pod_ready.go:92] pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:39.959625  249055 pod_ready.go:81] duration metric: took 3.018261782s waiting for pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.959638  249055 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.965182  249055 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:39.965204  249055 pod_ready.go:81] duration metric: took 5.558563ms waiting for pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.965218  249055 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.970258  249055 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:39.970283  249055 pod_ready.go:81] duration metric: took 5.058027ms waiting for pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.970293  249055 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jn2j8" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.975183  249055 pod_ready.go:92] pod "kube-proxy-jn2j8" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:39.975202  249055 pod_ready.go:81] duration metric: took 4.903272ms waiting for pod "kube-proxy-jn2j8" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.975209  249055 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:40.137875  249055 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:40.137907  249055 pod_ready.go:81] duration metric: took 162.69035ms waiting for pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:40.137921  249055 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:42.452793  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:40.167385  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:42.666396  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:41.198534  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:41.198640  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:41.210412  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:41.698420  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:41.698526  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:41.710324  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:42.198572  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:42.198649  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:42.210399  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:42.697932  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:42.698010  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:42.711010  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:43.198096  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:43.198182  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:43.209468  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:43.698864  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:43.698998  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:43.710735  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:44.163493  248084 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:13:44.163545  248084 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:13:44.163560  248084 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:13:44.163621  248084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:13:44.204352  248084 cri.go:89] found id: ""
	I1031 00:13:44.204444  248084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:13:44.219641  248084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:13:44.228342  248084 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:13:44.228420  248084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:13:44.237058  248084 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:13:44.237081  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:44.369926  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:45.077715  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:45.306025  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:45.399572  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:45.537955  248084 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:13:45.538046  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:45.554284  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:43.549424  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:46.052253  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:44.947118  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:46.954020  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:45.167622  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:47.669895  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:46.073056  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:46.572408  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:47.072392  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:47.098617  248084 api_server.go:72] duration metric: took 1.560662194s to wait for apiserver process to appear ...
	I1031 00:13:47.098650  248084 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:13:47.098673  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:48.547476  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:50.547537  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:49.446620  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:51.946346  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:53.949089  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:52.098997  248084 api_server.go:269] stopped: https://192.168.72.65:8443/healthz: Get "https://192.168.72.65:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1031 00:13:52.099073  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:52.709441  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:52.709490  248084 api_server.go:103] status: https://192.168.72.65:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:53.210178  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:53.216374  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1031 00:13:53.216403  248084 api_server.go:103] status: https://192.168.72.65:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1031 00:13:53.709935  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:53.717326  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1031 00:13:53.717361  248084 api_server.go:103] status: https://192.168.72.65:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1031 00:13:54.209883  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:54.215985  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 200:
	ok
	I1031 00:13:54.224088  248084 api_server.go:141] control plane version: v1.16.0
	I1031 00:13:54.224115  248084 api_server.go:131] duration metric: took 7.125456227s to wait for apiserver health ...
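The health wait that just completed is a plain poll of the apiserver's /healthz endpoint: it tolerates the initial timeout, the 403 returned to the anonymous probe, and the transient 500s while poststarthooks (rbac/bootstrap-roles and friends) finish, and stops once a bare 200/"ok" comes back. A minimal poller sketch under those assumptions follows; TLS verification is disabled only because this is an anonymous illustration without the cluster CA, which real code would load along with client credentials.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.72.65:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Anonymous probe without the cluster CA; expect 403/500 until the apiserver settles.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for a 200 from /healthz")
}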
	I1031 00:13:54.224127  248084 cni.go:84] Creating CNI manager for ""
	I1031 00:13:54.224135  248084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:54.226152  248084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:13:50.168563  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:52.669900  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:54.227723  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:13:54.239709  248084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
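The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced two lines earlier. Its exact contents are not shown in this log; the sketch below writes an assumed, typical bridge + host-local conflist to the same path purely for illustration, and the real file minikube generates may differ in name, subnet, and options.

package main

import (
	"fmt"
	"os"
)

// conflist is an assumed example of a bridge CNI config, not the file from the log.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	path := "/etc/cni/net.d/1-k8s.conflist"
	if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
		fmt.Println("write failed (root privileges are required for this path):", err)
		return
	}
	fmt.Println("wrote", len(conflist), "bytes to", path)
}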
	I1031 00:13:54.261391  248084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:13:54.273728  248084 system_pods.go:59] 7 kube-system pods found
	I1031 00:13:54.273761  248084 system_pods.go:61] "coredns-5644d7b6d9-2s6pc" [c77d23a4-28d0-4bbf-bb28-baff23fc4987] Running
	I1031 00:13:54.273775  248084 system_pods.go:61] "etcd-old-k8s-version-225140" [dcc629ce-f107-4d14-b69b-20228b00b7c5] Running
	I1031 00:13:54.273783  248084 system_pods.go:61] "kube-apiserver-old-k8s-version-225140" [38fd683e-51fa-40f0-a3c6-afdf57e14132] Running
	I1031 00:13:54.273791  248084 system_pods.go:61] "kube-controller-manager-old-k8s-version-225140" [29b1b9cb-1819-497e-b0f9-c008b0ac6e26] Running
	I1031 00:13:54.273803  248084 system_pods.go:61] "kube-proxy-fxz8t" [57ccd26e-cbcf-4ed3-adbe-778fd8bcf27c] Running
	I1031 00:13:54.273811  248084 system_pods.go:61] "kube-scheduler-old-k8s-version-225140" [d8d4d75c-25f8-4485-853c-8fa75105c6e2] Running
	I1031 00:13:54.273818  248084 system_pods.go:61] "storage-provisioner" [8fc76055-6a96-4884-8f91-b2d3f598bc88] Running
	I1031 00:13:54.273826  248084 system_pods.go:74] duration metric: took 12.417629ms to wait for pod list to return data ...
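The "7 kube-system pods found" listing above is simply a namespaced pod list against the freshly restored apiserver. A small client-go sketch of the same check follows; it assumes a kubeconfig at the default ~/.kube/config location and is not minikube's system_pods.go code.

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("  %q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}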
	I1031 00:13:54.273840  248084 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:13:54.279056  248084 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:13:54.279082  248084 node_conditions.go:123] node cpu capacity is 2
	I1031 00:13:54.279094  248084 node_conditions.go:105] duration metric: took 5.248504ms to run NodePressure ...
	I1031 00:13:54.279111  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:54.594257  248084 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1031 00:13:54.600279  248084 retry.go:31] will retry after 287.663167ms: kubelet not initialised
	I1031 00:13:54.899142  248084 retry.go:31] will retry after 297.826066ms: kubelet not initialised
	I1031 00:13:55.205347  248084 retry.go:31] will retry after 797.709551ms: kubelet not initialised
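The retry.go lines here show the "restarted kubelet" wait retrying with growing, slightly randomized delays (roughly 0.3s up through 15s) until it either initialises or the budget runs out. The following is a minimal sketch of that pattern, exponential backoff with jitter around a probe; the probe itself is a stand-in that succeeds after a few seconds so the example terminates, and none of this is minikube's retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// kubeletInitialised is a stand-in for the real check; it succeeds after a
// fixed time purely so the example finishes.
func kubeletInitialised(start time.Time) error {
	if time.Since(start) < 3*time.Second {
		return errors.New("kubelet not initialised")
	}
	return nil
}

func main() {
	start := time.Now()
	delay := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if err := kubeletInitialised(start); err == nil {
			fmt.Printf("kubelet initialised after %s (%d attempts)\n", time.Since(start), attempt)
			return
		}
		// Add up to 25% jitter, then double the base delay and cap it at 15s,
		// loosely mirroring the intervals in the log above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/4)+1))
		fmt.Printf("will retry after %s: kubelet not initialised\n", sleep)
		time.Sleep(sleep)
		if delay *= 2; delay > 15*time.Second {
			delay = 15 * time.Second
		}
	}
}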
	I1031 00:13:52.548142  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:54.548667  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:57.047942  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:56.446395  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:58.946167  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:55.167909  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:57.668179  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:59.668339  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:56.009099  248084 retry.go:31] will retry after 571.448668ms: kubelet not initialised
	I1031 00:13:56.593388  248084 retry.go:31] will retry after 1.82270665s: kubelet not initialised
	I1031 00:13:58.421789  248084 retry.go:31] will retry after 1.094040234s: kubelet not initialised
	I1031 00:13:59.522021  248084 retry.go:31] will retry after 3.716569913s: kubelet not initialised
	I1031 00:13:59.549278  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:01.551103  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:01.446913  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:03.947203  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:01.668422  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:03.668478  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:03.244381  248084 retry.go:31] will retry after 4.104024564s: kubelet not initialised
	I1031 00:14:04.048498  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:06.548070  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:06.447864  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:08.945886  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:06.166653  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:08.167008  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:07.354371  248084 retry.go:31] will retry after 9.18347873s: kubelet not initialised
	I1031 00:14:09.047421  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:11.048479  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:11.448689  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:13.948268  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:10.667348  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:12.667812  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:13.052934  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:15.547846  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:16.446625  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:18.447872  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:15.167259  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:17.666670  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:19.667251  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:16.544997  248084 retry.go:31] will retry after 8.29261189s: kubelet not initialised
	I1031 00:14:17.550692  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:20.045758  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:22.047516  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:20.946805  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:23.446875  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:21.667436  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:24.167210  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:24.843011  248084 retry.go:31] will retry after 15.309414425s: kubelet not initialised
	I1031 00:14:24.048197  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:26.546847  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:25.946796  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:27.950212  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:26.167443  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:28.168482  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:28.548116  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:31.047187  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:30.446164  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:32.451487  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:30.666762  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:32.667234  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:33.049216  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:35.545964  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:34.946961  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:36.947212  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:38.949437  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:35.167751  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:37.668981  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:39.669233  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:40.157618  248084 kubeadm.go:787] kubelet initialised
	I1031 00:14:40.157647  248084 kubeadm.go:788] duration metric: took 45.563360213s waiting for restarted kubelet to initialise ...
	I1031 00:14:40.157660  248084 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:14:40.163372  248084 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-2s6pc" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.169776  248084 pod_ready.go:92] pod "coredns-5644d7b6d9-2s6pc" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.169798  248084 pod_ready.go:81] duration metric: took 6.398827ms waiting for pod "coredns-5644d7b6d9-2s6pc" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.169806  248084 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-b6lnc" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.175023  248084 pod_ready.go:92] pod "coredns-5644d7b6d9-b6lnc" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.175047  248084 pod_ready.go:81] duration metric: took 5.233827ms waiting for pod "coredns-5644d7b6d9-b6lnc" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.175058  248084 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.179248  248084 pod_ready.go:92] pod "etcd-old-k8s-version-225140" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.179269  248084 pod_ready.go:81] duration metric: took 4.202967ms waiting for pod "etcd-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.179279  248084 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.183579  248084 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-225140" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.183593  248084 pod_ready.go:81] duration metric: took 4.308627ms waiting for pod "kube-apiserver-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.183604  248084 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.558275  248084 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-225140" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.558308  248084 pod_ready.go:81] duration metric: took 374.694908ms waiting for pod "kube-controller-manager-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.558321  248084 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fxz8t" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:37.547289  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:40.047586  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:41.446752  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:43.447874  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:42.166207  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:44.167277  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:40.958069  248084 pod_ready.go:92] pod "kube-proxy-fxz8t" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.958099  248084 pod_ready.go:81] duration metric: took 399.768399ms waiting for pod "kube-proxy-fxz8t" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.958112  248084 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:41.358244  248084 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-225140" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:41.358274  248084 pod_ready.go:81] duration metric: took 400.15381ms waiting for pod "kube-scheduler-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:41.358284  248084 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace to be "Ready" ...
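Each pod_ready line boils down to fetching the pod and inspecting its Ready condition; the metrics-server pod being waited on here never reports Ready "True", which is what eventually exhausts the 4m0s budget further down. A hedged client-go sketch of that single check follows (again assuming a local kubeconfig; not minikube's pod_ready.go implementation).

package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

// podReady reports whether the named pod currently has condition Ready=True.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(cs, "kube-system", "metrics-server-74d5856cc6-l6gmw")
	fmt.Printf("Ready=%v err=%v\n", ready, err)
}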
	I1031 00:14:43.666594  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:45.666948  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:42.547950  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:45.047306  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:45.946510  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:47.946663  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:46.167952  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:48.667854  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:48.166448  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:50.167022  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:47.547211  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:49.548100  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:51.548509  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:50.446801  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:52.447233  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:51.168676  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:53.667170  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:52.666608  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:54.667583  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:53.550528  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:56.050177  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:54.947677  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:57.447082  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:55.669616  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:58.170640  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:57.165612  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:59.168165  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:58.548441  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:01.047296  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:59.447626  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:01.947292  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:00.669772  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:03.167493  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:01.665706  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:04.166609  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:03.546708  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:05.547092  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:04.447672  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:06.449541  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:08.948333  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:05.667422  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:07.669173  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:06.666325  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:09.165998  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:07.547133  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:09.547568  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:11.551676  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:11.446875  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:13.946673  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:10.168209  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:12.666973  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:14.668147  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:11.166824  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:13.665410  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:14.046068  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:16.047803  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:15.946975  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:18.445704  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:17.167480  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:19.668157  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:16.165876  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:18.166620  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:20.666455  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:18.549666  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:21.046823  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:20.447212  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:22.947109  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:22.167144  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:24.168041  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:22.667076  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:25.167164  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:23.047419  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:25.049728  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:24.947312  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:27.449246  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:26.669861  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:29.168519  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:27.666465  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:30.166123  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:27.547889  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:30.046604  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:32.048045  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:29.948497  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:32.446948  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:31.670479  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:34.167604  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:32.668009  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:35.165749  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:34.547533  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:37.048031  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:34.945337  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:36.947811  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:36.168180  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:38.170343  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:37.168053  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:39.665709  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:39.552108  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:42.047262  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:39.451699  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:41.946296  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:40.667428  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:42.668235  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:41.666624  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:44.166672  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:44.047729  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:46.549442  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:44.447109  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:46.448250  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:48.947017  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:45.167138  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:47.666886  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:49.667907  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:46.669428  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:49.166194  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:49.047526  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:51.049047  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:50.947410  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:53.446734  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:52.167771  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:54.167875  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:51.666228  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:53.667295  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:53.052036  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:55.547767  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:55.946776  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:58.446825  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:56.668562  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:59.168110  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:56.167716  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:58.665487  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:00.668666  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:58.047770  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:00.047908  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:02.048356  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:00.946590  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:02.947001  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:01.667160  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:04.167375  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:03.165171  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:05.166289  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:04.049788  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:06.547020  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:05.446511  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:07.449772  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:06.667622  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:08.667665  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:07.166410  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:09.166536  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:09.049966  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:11.547967  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:09.947975  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:12.447789  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:11.168645  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:13.667838  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:11.665962  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:13.667117  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:15.667752  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:14.047716  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:16.048052  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:14.947264  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:16.947386  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:16.167045  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:18.668483  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:17.669275  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:20.167079  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:18.548369  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:20.548635  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:19.448662  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:21.947615  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:21.167164  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:23.167506  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:22.666820  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:25.166614  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:23.046392  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:25.548954  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:24.446814  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:26.945792  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:28.947133  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:25.167732  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:27.168662  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:29.171362  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:27.169221  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:29.667206  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:27.550807  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:30.048391  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:31.448249  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:33.946336  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:31.667185  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:33.667628  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:32.165207  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:34.166237  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:32.546558  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:35.046558  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:37.047654  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:35.946896  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:38.449959  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:35.668366  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:38.168509  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:36.166529  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:38.666448  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:39.552154  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:42.046335  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:40.946962  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:43.446383  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:40.666758  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:42.668031  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:41.168643  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:43.170216  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:45.666959  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:44.046908  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:46.548312  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:45.947573  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:47.947914  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:45.166562  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:47.667578  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:47.667903  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:50.166574  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:49.046763  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:51.047566  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:49.948510  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:52.446760  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:50.168646  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:52.667122  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:54.668132  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:52.168815  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:54.667713  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:53.546751  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:56.048217  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:54.947315  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:57.447727  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:57.169330  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:59.666819  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:57.166002  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:59.168109  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:58.548212  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:01.047033  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:59.448330  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:01.946970  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:01.667755  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:04.167493  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:01.666457  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:04.167186  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:03.546842  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:05.547488  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:04.445743  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:06.446624  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:08.451015  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:05.644115  248387 pod_ready.go:81] duration metric: took 4m0.000125657s waiting for pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace to be "Ready" ...
	E1031 00:17:05.644148  248387 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:17:05.644168  248387 pod_ready.go:38] duration metric: took 4m9.241022532s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:17:05.644198  248387 kubeadm.go:640] restartCluster took 4m28.058055798s
	W1031 00:17:05.644570  248387 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1031 00:17:05.644685  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
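For context on the timeout above: the repeated pod_ready.go:102 lines are minikube polling the metrics-server pod's Ready condition until the 4m0s deadline expires, after which it gives up on restarting the cluster and falls back to `kubeadm reset`. A minimal sketch of that kind of Ready check with client-go follows; it is illustrative only — the kubeconfig path and pod name are taken from this run, but this is not minikube's actual pod_ready.go:

    // Illustrative sketch of a pod Ready-condition check; not minikube's pod_ready.go.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the named pod has its Ready condition set to True.
    func podIsReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	// Kubeconfig path and pod name are taken from this log; adjust for your environment.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ready, err := podIsReady(context.Background(), cs, "kube-system", "metrics-server-57f55c9bc5-nm8dj")
    	fmt.Println(ready, err)
    }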
	I1031 00:17:06.168910  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:08.666612  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:08.047998  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:10.547186  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:10.946940  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:13.455539  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:11.168678  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:13.667122  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:13.046682  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:13.240656  248718 pod_ready.go:81] duration metric: took 4m0.001083426s waiting for pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace to be "Ready" ...
	E1031 00:17:13.240702  248718 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:17:13.240712  248718 pod_ready.go:38] duration metric: took 4m0.801552437s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:17:13.240732  248718 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:17:13.240766  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:17:13.240930  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:17:13.307072  248718 cri.go:89] found id: "bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:13.307099  248718 cri.go:89] found id: ""
	I1031 00:17:13.307108  248718 logs.go:284] 1 containers: [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033]
	I1031 00:17:13.307180  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.312997  248718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:17:13.313067  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:17:13.364439  248718 cri.go:89] found id: "35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:13.364474  248718 cri.go:89] found id: ""
	I1031 00:17:13.364485  248718 logs.go:284] 1 containers: [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6]
	I1031 00:17:13.364561  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.370120  248718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:17:13.370186  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:17:13.413937  248718 cri.go:89] found id: "8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:13.413972  248718 cri.go:89] found id: ""
	I1031 00:17:13.413983  248718 logs.go:284] 1 containers: [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26]
	I1031 00:17:13.414051  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.420586  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:17:13.420669  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:17:13.476980  248718 cri.go:89] found id: "ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:13.477008  248718 cri.go:89] found id: ""
	I1031 00:17:13.477028  248718 logs.go:284] 1 containers: [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80]
	I1031 00:17:13.477100  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.482874  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:17:13.482957  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:17:13.532196  248718 cri.go:89] found id: "f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:13.532232  248718 cri.go:89] found id: ""
	I1031 00:17:13.532244  248718 logs.go:284] 1 containers: [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3]
	I1031 00:17:13.532314  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.539868  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:17:13.540017  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:17:13.595189  248718 cri.go:89] found id: "4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:13.595218  248718 cri.go:89] found id: ""
	I1031 00:17:13.595231  248718 logs.go:284] 1 containers: [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70]
	I1031 00:17:13.595305  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.601429  248718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:17:13.601496  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:17:13.641957  248718 cri.go:89] found id: ""
	I1031 00:17:13.641984  248718 logs.go:284] 0 containers: []
	W1031 00:17:13.641992  248718 logs.go:286] No container was found matching "kindnet"
	I1031 00:17:13.641998  248718 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:17:13.642053  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:17:13.683163  248718 cri.go:89] found id: "86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:13.683193  248718 cri.go:89] found id: "622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:13.683200  248718 cri.go:89] found id: ""
	I1031 00:17:13.683209  248718 logs.go:284] 2 containers: [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c]
	I1031 00:17:13.683266  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.689222  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.693814  248718 logs.go:123] Gathering logs for dmesg ...
	I1031 00:17:13.693839  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:17:13.710167  248718 logs.go:123] Gathering logs for kube-proxy [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3] ...
	I1031 00:17:13.710188  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:13.754241  248718 logs.go:123] Gathering logs for storage-provisioner [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3] ...
	I1031 00:17:13.754273  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:13.800473  248718 logs.go:123] Gathering logs for kube-apiserver [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033] ...
	I1031 00:17:13.800508  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:13.857072  248718 logs.go:123] Gathering logs for coredns [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26] ...
	I1031 00:17:13.857101  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:13.901072  248718 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:17:13.901102  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:17:14.390850  248718 logs.go:123] Gathering logs for container status ...
	I1031 00:17:14.390894  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:17:14.446107  248718 logs.go:123] Gathering logs for etcd [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6] ...
	I1031 00:17:14.446141  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:14.495337  248718 logs.go:123] Gathering logs for kube-scheduler [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80] ...
	I1031 00:17:14.495368  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:14.535558  248718 logs.go:123] Gathering logs for kube-controller-manager [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70] ...
	I1031 00:17:14.535591  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:14.589637  248718 logs.go:123] Gathering logs for kubelet ...
	I1031 00:17:14.589676  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1031 00:17:14.650509  248718 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:17:14.650559  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:17:14.816331  248718 logs.go:123] Gathering logs for storage-provisioner [622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c] ...
	I1031 00:17:14.816362  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:17.363336  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:17:17.378105  248718 api_server.go:72] duration metric: took 4m12.292425365s to wait for apiserver process to appear ...
	I1031 00:17:17.378131  248718 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:17:17.378171  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:17:17.378234  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:17:17.424054  248718 cri.go:89] found id: "bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:17.424082  248718 cri.go:89] found id: ""
	I1031 00:17:17.424091  248718 logs.go:284] 1 containers: [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033]
	I1031 00:17:17.424152  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.428185  248718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:17:17.428246  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:17:17.465132  248718 cri.go:89] found id: "35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:17.465157  248718 cri.go:89] found id: ""
	I1031 00:17:17.465167  248718 logs.go:284] 1 containers: [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6]
	I1031 00:17:17.465219  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.469315  248718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:17:17.469392  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:17:17.504119  248718 cri.go:89] found id: "8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:17.504140  248718 cri.go:89] found id: ""
	I1031 00:17:17.504151  248718 logs.go:284] 1 containers: [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26]
	I1031 00:17:17.504199  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:15.946464  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:17.949398  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:19.822838  248387 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.178119551s)
	I1031 00:17:19.822927  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:17:19.838182  248387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:17:19.847738  248387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:17:19.857883  248387 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
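The failed `ls` above is minikube's stale-config check: because `kubeadm reset` already deleted the kubeconfig files, there is nothing to clean up and it proceeds straight to a fresh `kubeadm init`. A rough local-Go equivalent of that existence check, shown only as a sketch (minikube actually shells out to `ls -la` over SSH, as logged above):

    // Sketch of a stale-kubeconfig existence check; minikube itself runs `ls -la` via SSH.
    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	stale := true
    	for _, f := range files {
    		if _, err := os.Stat(f); err != nil {
    			// A missing file means there is no stale config left to clean up.
    			stale = false
    			fmt.Printf("%s not found: %v\n", f, err)
    		}
    	}
    	fmt.Println("stale config present:", stale)
    }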
	I1031 00:17:19.857939  248387 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1031 00:17:19.911372  248387 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1031 00:17:19.911432  248387 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 00:17:20.091412  248387 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 00:17:20.091582  248387 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 00:17:20.091703  248387 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 00:17:20.351519  248387 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 00:17:16.166533  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:18.668258  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:20.353310  248387 out.go:204]   - Generating certificates and keys ...
	I1031 00:17:20.353500  248387 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 00:17:20.353598  248387 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 00:17:20.353712  248387 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1031 00:17:20.353809  248387 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1031 00:17:20.353933  248387 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1031 00:17:20.354050  248387 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1031 00:17:20.354132  248387 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1031 00:17:20.354241  248387 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1031 00:17:20.354353  248387 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1031 00:17:20.354596  248387 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1031 00:17:20.355193  248387 kubeadm.go:322] [certs] Using the existing "sa" key
	I1031 00:17:20.355332  248387 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 00:17:21.009329  248387 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 00:17:21.145431  248387 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 00:17:21.231013  248387 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 00:17:21.384423  248387 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 00:17:21.385066  248387 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 00:17:21.387895  248387 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 00:17:17.508240  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:17:17.510213  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:17:17.548666  248718 cri.go:89] found id: "ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:17.548692  248718 cri.go:89] found id: ""
	I1031 00:17:17.548702  248718 logs.go:284] 1 containers: [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80]
	I1031 00:17:17.548768  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.552963  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:17:17.553029  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:17:17.593690  248718 cri.go:89] found id: "f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:17.593728  248718 cri.go:89] found id: ""
	I1031 00:17:17.593739  248718 logs.go:284] 1 containers: [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3]
	I1031 00:17:17.593808  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.598269  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:17:17.598325  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:17:17.637723  248718 cri.go:89] found id: "4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:17.637750  248718 cri.go:89] found id: ""
	I1031 00:17:17.637761  248718 logs.go:284] 1 containers: [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70]
	I1031 00:17:17.637826  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.642006  248718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:17:17.642055  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:17:17.686659  248718 cri.go:89] found id: ""
	I1031 00:17:17.686687  248718 logs.go:284] 0 containers: []
	W1031 00:17:17.686695  248718 logs.go:286] No container was found matching "kindnet"
	I1031 00:17:17.686701  248718 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:17:17.686766  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:17:17.732114  248718 cri.go:89] found id: "86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:17.732147  248718 cri.go:89] found id: "622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:17.732154  248718 cri.go:89] found id: ""
	I1031 00:17:17.732163  248718 logs.go:284] 2 containers: [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c]
	I1031 00:17:17.732232  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.737308  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.741981  248718 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:17:17.742013  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:17:18.181024  248718 logs.go:123] Gathering logs for dmesg ...
	I1031 00:17:18.181062  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:17:18.196483  248718 logs.go:123] Gathering logs for coredns [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26] ...
	I1031 00:17:18.196519  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:18.235422  248718 logs.go:123] Gathering logs for kube-controller-manager [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70] ...
	I1031 00:17:18.235458  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:18.291366  248718 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:17:18.291402  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:17:18.412906  248718 logs.go:123] Gathering logs for etcd [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6] ...
	I1031 00:17:18.412960  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:18.469631  248718 logs.go:123] Gathering logs for kubelet ...
	I1031 00:17:18.469669  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1031 00:17:18.523997  248718 logs.go:123] Gathering logs for kube-scheduler [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80] ...
	I1031 00:17:18.524034  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:18.566490  248718 logs.go:123] Gathering logs for container status ...
	I1031 00:17:18.566520  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:17:18.626106  248718 logs.go:123] Gathering logs for storage-provisioner [622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c] ...
	I1031 00:17:18.626138  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:18.666341  248718 logs.go:123] Gathering logs for kube-apiserver [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033] ...
	I1031 00:17:18.666382  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:18.729380  248718 logs.go:123] Gathering logs for kube-proxy [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3] ...
	I1031 00:17:18.729430  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:18.788148  248718 logs.go:123] Gathering logs for storage-provisioner [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3] ...
	I1031 00:17:18.788182  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:21.330782  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:17:21.338085  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 200:
	ok
	I1031 00:17:21.339623  248718 api_server.go:141] control plane version: v1.28.3
	I1031 00:17:21.339671  248718 api_server.go:131] duration metric: took 3.961531332s to wait for apiserver health ...
	I1031 00:17:21.339684  248718 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:17:21.339718  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:17:21.339786  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:17:21.380659  248718 cri.go:89] found id: "bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:21.380687  248718 cri.go:89] found id: ""
	I1031 00:17:21.380696  248718 logs.go:284] 1 containers: [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033]
	I1031 00:17:21.380760  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.385559  248718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:17:21.385626  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:17:21.431810  248718 cri.go:89] found id: "35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:21.431841  248718 cri.go:89] found id: ""
	I1031 00:17:21.431851  248718 logs.go:284] 1 containers: [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6]
	I1031 00:17:21.431914  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.436489  248718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:17:21.436562  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:17:21.489003  248718 cri.go:89] found id: "8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:21.489036  248718 cri.go:89] found id: ""
	I1031 00:17:21.489047  248718 logs.go:284] 1 containers: [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26]
	I1031 00:17:21.489109  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.493691  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:17:21.493765  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:17:21.533480  248718 cri.go:89] found id: "ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:21.533507  248718 cri.go:89] found id: ""
	I1031 00:17:21.533518  248718 logs.go:284] 1 containers: [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80]
	I1031 00:17:21.533584  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.538269  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:17:21.538358  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:17:21.589588  248718 cri.go:89] found id: "f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:21.589621  248718 cri.go:89] found id: ""
	I1031 00:17:21.589632  248718 logs.go:284] 1 containers: [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3]
	I1031 00:17:21.589705  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.595927  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:17:21.596020  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:17:21.644705  248718 cri.go:89] found id: "4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:21.644730  248718 cri.go:89] found id: ""
	I1031 00:17:21.644738  248718 logs.go:284] 1 containers: [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70]
	I1031 00:17:21.644797  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.649696  248718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:17:21.649762  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:17:21.696655  248718 cri.go:89] found id: ""
	I1031 00:17:21.696692  248718 logs.go:284] 0 containers: []
	W1031 00:17:21.696703  248718 logs.go:286] No container was found matching "kindnet"
	I1031 00:17:21.696711  248718 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:17:21.696788  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:17:21.743499  248718 cri.go:89] found id: "86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:21.743523  248718 cri.go:89] found id: "622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:21.743528  248718 cri.go:89] found id: ""
	I1031 00:17:21.743535  248718 logs.go:284] 2 containers: [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c]
	I1031 00:17:21.743586  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.748625  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.753187  248718 logs.go:123] Gathering logs for dmesg ...
	I1031 00:17:21.753223  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:17:21.768074  248718 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:17:21.768115  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:17:21.913742  248718 logs.go:123] Gathering logs for coredns [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26] ...
	I1031 00:17:21.913782  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:21.966345  248718 logs.go:123] Gathering logs for storage-provisioner [622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c] ...
	I1031 00:17:21.966394  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:22.004823  248718 logs.go:123] Gathering logs for container status ...
	I1031 00:17:22.004857  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:17:22.059117  248718 logs.go:123] Gathering logs for etcd [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6] ...
	I1031 00:17:22.059147  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:22.117615  248718 logs.go:123] Gathering logs for kube-scheduler [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80] ...
	I1031 00:17:22.117655  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:22.160231  248718 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:17:22.160275  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:17:20.445730  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:22.447412  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:21.390006  248387 out.go:204]   - Booting up control plane ...
	I1031 00:17:21.390170  248387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 00:17:21.390275  248387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 00:17:21.391130  248387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 00:17:21.408062  248387 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 00:17:21.409190  248387 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 00:17:21.409256  248387 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1031 00:17:21.565150  248387 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 00:17:22.536881  248718 logs.go:123] Gathering logs for kube-apiserver [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033] ...
	I1031 00:17:22.536920  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:22.591993  248718 logs.go:123] Gathering logs for kube-proxy [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3] ...
	I1031 00:17:22.592030  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:22.644262  248718 logs.go:123] Gathering logs for storage-provisioner [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3] ...
	I1031 00:17:22.644302  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:22.688848  248718 logs.go:123] Gathering logs for kubelet ...
	I1031 00:17:22.688880  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1031 00:17:22.740390  248718 logs.go:123] Gathering logs for kube-controller-manager [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70] ...
	I1031 00:17:22.740440  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:25.317640  248718 system_pods.go:59] 8 kube-system pods found
	I1031 00:17:25.317675  248718 system_pods.go:61] "coredns-5dd5756b68-dqrs4" [f6d80a09-c397-4c78-a038-f07cad11de9c] Running
	I1031 00:17:25.317682  248718 system_pods.go:61] "etcd-embed-certs-078843" [2dd3d20f-1309-4ec9-ab75-6b00cadc5827] Running
	I1031 00:17:25.317690  248718 system_pods.go:61] "kube-apiserver-embed-certs-078843" [6a41123e-11a9-4aff-8f78-802b8f59a1bb] Running
	I1031 00:17:25.317696  248718 system_pods.go:61] "kube-controller-manager-embed-certs-078843" [9ccb551e-3e3f-4cdc-991e-65b41febf105] Running
	I1031 00:17:25.317702  248718 system_pods.go:61] "kube-proxy-287dq" [c9c3a3a9-ff79-4cd8-ab26-a4ca2bec1fd9] Running
	I1031 00:17:25.317709  248718 system_pods.go:61] "kube-scheduler-embed-certs-078843" [13a0f095-b945-437c-a7ef-929739bfcb01] Running
	I1031 00:17:25.317718  248718 system_pods.go:61] "metrics-server-57f55c9bc5-pm6qx" [5ed61015-eb88-4381-adc3-8d1f4021c6aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:17:25.317728  248718 system_pods.go:61] "storage-provisioner" [6bce0572-aad8-4a9f-978f-9bd0ff62904a] Running
	I1031 00:17:25.317737  248718 system_pods.go:74] duration metric: took 3.978040466s to wait for pod list to return data ...
	I1031 00:17:25.317752  248718 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:17:25.320120  248718 default_sa.go:45] found service account: "default"
	I1031 00:17:25.320147  248718 default_sa.go:55] duration metric: took 2.387709ms for default service account to be created ...
	I1031 00:17:25.320156  248718 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:17:25.325979  248718 system_pods.go:86] 8 kube-system pods found
	I1031 00:17:25.326004  248718 system_pods.go:89] "coredns-5dd5756b68-dqrs4" [f6d80a09-c397-4c78-a038-f07cad11de9c] Running
	I1031 00:17:25.326009  248718 system_pods.go:89] "etcd-embed-certs-078843" [2dd3d20f-1309-4ec9-ab75-6b00cadc5827] Running
	I1031 00:17:25.326014  248718 system_pods.go:89] "kube-apiserver-embed-certs-078843" [6a41123e-11a9-4aff-8f78-802b8f59a1bb] Running
	I1031 00:17:25.326018  248718 system_pods.go:89] "kube-controller-manager-embed-certs-078843" [9ccb551e-3e3f-4cdc-991e-65b41febf105] Running
	I1031 00:17:25.326022  248718 system_pods.go:89] "kube-proxy-287dq" [c9c3a3a9-ff79-4cd8-ab26-a4ca2bec1fd9] Running
	I1031 00:17:25.326025  248718 system_pods.go:89] "kube-scheduler-embed-certs-078843" [13a0f095-b945-437c-a7ef-929739bfcb01] Running
	I1031 00:17:25.326055  248718 system_pods.go:89] "metrics-server-57f55c9bc5-pm6qx" [5ed61015-eb88-4381-adc3-8d1f4021c6aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:17:25.326079  248718 system_pods.go:89] "storage-provisioner" [6bce0572-aad8-4a9f-978f-9bd0ff62904a] Running
	I1031 00:17:25.326088  248718 system_pods.go:126] duration metric: took 5.92719ms to wait for k8s-apps to be running ...
	I1031 00:17:25.326097  248718 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:17:25.326148  248718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:17:25.342753  248718 system_svc.go:56] duration metric: took 16.646026ms WaitForService to wait for kubelet.
	I1031 00:17:25.342775  248718 kubeadm.go:581] duration metric: took 4m20.257105243s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:17:25.342793  248718 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:17:25.348257  248718 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:17:25.348315  248718 node_conditions.go:123] node cpu capacity is 2
	I1031 00:17:25.348379  248718 node_conditions.go:105] duration metric: took 5.579398ms to run NodePressure ...
	I1031 00:17:25.348413  248718 start.go:228] waiting for startup goroutines ...
	I1031 00:17:25.348426  248718 start.go:233] waiting for cluster config update ...
	I1031 00:17:25.348440  248718 start.go:242] writing updated cluster config ...
	I1031 00:17:25.349022  248718 ssh_runner.go:195] Run: rm -f paused
	I1031 00:17:25.415112  248718 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1031 00:17:25.418179  248718 out.go:177] * Done! kubectl is now configured to use "embed-certs-078843" cluster and "default" namespace by default
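The "Done!" line above follows a client/server version comparison (kubectl 1.28.3 against cluster 1.28.3, minor skew 0). A small sketch of how such a minor-version skew can be computed, assuming "major.minor.patch" version strings; this is not minikube's actual implementation:

    // Sketch of a kubectl/cluster minor-version skew check for "major.minor.patch" strings.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minor extracts the minor component from a version string such as "1.28.3" or "v1.28.3".
    func minor(v string) (int, error) {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	if len(parts) < 2 {
    		return 0, fmt.Errorf("unexpected version %q", v)
    	}
    	return strconv.Atoi(parts[1])
    }

    func main() {
    	client, _ := minor("1.28.3") // kubectl version from the log
    	server, _ := minor("1.28.3") // cluster version from the log
    	skew := client - server
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("minor skew: %d\n", skew) // prints 0, matching the log line above
    }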
	I1031 00:17:21.166338  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:23.666609  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:24.447530  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:26.947352  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:29.570822  248387 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004974 seconds
	I1031 00:17:29.570964  248387 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 00:17:29.587033  248387 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 00:17:30.119470  248387 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 00:17:30.119696  248387 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-640155 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 00:17:30.635312  248387 kubeadm.go:322] [bootstrap-token] Using token: cwaa4b.bqwxrocs0j7ngn44
	I1031 00:17:26.166271  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:28.664576  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:30.664963  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:30.636717  248387 out.go:204]   - Configuring RBAC rules ...
	I1031 00:17:30.636873  248387 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 00:17:30.642895  248387 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 00:17:30.651729  248387 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 00:17:30.655472  248387 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 00:17:30.659228  248387 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 00:17:30.668748  248387 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 00:17:30.690255  248387 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 00:17:30.950445  248387 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 00:17:31.051453  248387 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 00:17:31.051475  248387 kubeadm.go:322] 
	I1031 00:17:31.051536  248387 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 00:17:31.051583  248387 kubeadm.go:322] 
	I1031 00:17:31.051709  248387 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 00:17:31.051728  248387 kubeadm.go:322] 
	I1031 00:17:31.051767  248387 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 00:17:31.051843  248387 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 00:17:31.051930  248387 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 00:17:31.051943  248387 kubeadm.go:322] 
	I1031 00:17:31.052013  248387 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1031 00:17:31.052024  248387 kubeadm.go:322] 
	I1031 00:17:31.052104  248387 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 00:17:31.052130  248387 kubeadm.go:322] 
	I1031 00:17:31.052191  248387 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 00:17:31.052280  248387 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 00:17:31.052375  248387 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 00:17:31.052383  248387 kubeadm.go:322] 
	I1031 00:17:31.052485  248387 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 00:17:31.052578  248387 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 00:17:31.052612  248387 kubeadm.go:322] 
	I1031 00:17:31.052744  248387 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token cwaa4b.bqwxrocs0j7ngn44 \
	I1031 00:17:31.052900  248387 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 \
	I1031 00:17:31.052957  248387 kubeadm.go:322] 	--control-plane 
	I1031 00:17:31.052969  248387 kubeadm.go:322] 
	I1031 00:17:31.053092  248387 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 00:17:31.053107  248387 kubeadm.go:322] 
	I1031 00:17:31.053217  248387 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token cwaa4b.bqwxrocs0j7ngn44 \
	I1031 00:17:31.053359  248387 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1031 00:17:31.053517  248387 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 00:17:31.053540  248387 cni.go:84] Creating CNI manager for ""
	I1031 00:17:31.053552  248387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:17:31.055477  248387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:17:29.447694  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:31.449117  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:33.947759  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:31.056845  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:17:31.095104  248387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:17:31.131198  248387 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:17:31.131322  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:31.131337  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4 minikube.k8s.io/name=no-preload-640155 minikube.k8s.io/updated_at=2023_10_31T00_17_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:31.581951  248387 ops.go:34] apiserver oom_adj: -16
	I1031 00:17:31.582010  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:31.741330  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:32.350182  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:32.850643  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:33.350205  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:33.850216  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:34.349583  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:34.850194  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:32.666281  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:35.168579  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:36.449644  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:38.946898  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:35.350661  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:35.850301  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:36.349673  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:36.849749  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:37.349755  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:37.850628  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:38.350204  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:38.849697  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:39.350194  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:39.850027  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:37.667083  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:40.166305  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:40.349747  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:40.850194  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:41.350476  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:41.850214  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:42.350555  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:42.850295  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:43.350645  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:43.679529  248387 kubeadm.go:1081] duration metric: took 12.548274555s to wait for elevateKubeSystemPrivileges.
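The block of repeated `kubectl get sa default` runs above is a retry loop: as part of the elevateKubeSystemPrivileges step timed at 12.5s, minikube polls roughly every 500ms until the default ServiceAccount exists. A hedged sketch of the same wait expressed with client-go — the helper name and kubeconfig path are assumptions, not minikube's code:

    // Sketch of a retry-until-deadline wait for the default ServiceAccount; names are hypothetical.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitForDefaultSA(cs *kubernetes.Clientset, timeout, interval time.Duration) error {
    	ctx, cancel := context.WithTimeout(context.Background(), timeout)
    	defer cancel()
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		if _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err == nil {
    			return nil // the service account exists; cluster bootstrap can proceed
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("timed out waiting for default service account: %w", ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitForDefaultSA(cs, 2*time.Minute, 500*time.Millisecond))
    }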
	I1031 00:17:43.679561  248387 kubeadm.go:406] StartCluster complete in 5m6.156207823s
	I1031 00:17:43.679585  248387 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:17:43.679674  248387 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:17:43.682045  248387 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:17:43.684483  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:17:43.684785  248387 config.go:182] Loaded profile config "no-preload-640155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:17:43.684856  248387 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:17:43.684927  248387 addons.go:69] Setting storage-provisioner=true in profile "no-preload-640155"
	I1031 00:17:43.685036  248387 addons.go:231] Setting addon storage-provisioner=true in "no-preload-640155"
	W1031 00:17:43.685063  248387 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:17:43.685159  248387 host.go:66] Checking if "no-preload-640155" exists ...
	I1031 00:17:43.685323  248387 addons.go:69] Setting metrics-server=true in profile "no-preload-640155"
	I1031 00:17:43.685339  248387 addons.go:231] Setting addon metrics-server=true in "no-preload-640155"
	W1031 00:17:43.685356  248387 addons.go:240] addon metrics-server should already be in state true
	I1031 00:17:43.685395  248387 host.go:66] Checking if "no-preload-640155" exists ...
	I1031 00:17:43.685653  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.685706  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.685893  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.685978  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.686168  248387 addons.go:69] Setting default-storageclass=true in profile "no-preload-640155"
	I1031 00:17:43.686191  248387 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-640155"
	I1031 00:17:43.686545  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.686651  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.705002  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1031 00:17:43.705181  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39807
	I1031 00:17:43.705556  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.706410  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.706515  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.706543  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.706893  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I1031 00:17:43.706968  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.707139  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:17:43.707141  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.707157  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.707503  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.708166  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.708183  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.708236  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.708752  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.708783  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.709044  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.709715  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.709762  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.711511  248387 addons.go:231] Setting addon default-storageclass=true in "no-preload-640155"
	W1031 00:17:43.711525  248387 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:17:43.711553  248387 host.go:66] Checking if "no-preload-640155" exists ...
	I1031 00:17:43.711887  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.711927  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.730687  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I1031 00:17:43.731513  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.732184  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.732205  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.732737  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.733201  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:17:43.734567  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33799
	I1031 00:17:43.734708  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38837
	I1031 00:17:43.735166  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.735665  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.735687  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.736245  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.736325  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.736490  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:17:43.736559  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:17:43.737461  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.739478  248387 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:17:43.737480  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.738913  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:17:43.741138  248387 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:17:43.741154  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:17:43.741176  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:17:43.742564  248387 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:17:43.741663  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.744300  248387 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:17:43.744312  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:17:43.744326  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:17:43.744413  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.745065  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.745106  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.753076  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.753082  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:17:43.753110  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:17:43.753196  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:17:43.753200  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:17:43.753235  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.753249  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.753282  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:17:43.753376  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:17:43.753469  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:17:43.753527  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:17:43.753624  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:17:43.753739  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:17:43.770481  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44553
	I1031 00:17:43.770925  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.773191  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.773223  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.773636  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.773840  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:17:43.775633  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:17:43.775954  248387 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:17:43.775969  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:17:43.775988  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:17:43.778552  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.778797  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:17:43.778823  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.779021  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:17:43.779204  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:17:43.779386  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:17:43.779683  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:17:43.936171  248387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:17:43.958064  248387 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:17:43.958098  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:17:43.967116  248387 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-640155" context rescaled to 1 replicas
	I1031 00:17:43.967170  248387 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.168 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:17:43.969408  248387 out.go:177] * Verifying Kubernetes components...
	I1031 00:17:40.138062  249055 pod_ready.go:81] duration metric: took 4m0.000119587s waiting for pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace to be "Ready" ...
	E1031 00:17:40.138098  249055 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:17:40.138122  249055 pod_ready.go:38] duration metric: took 4m11.730710605s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:17:40.138164  249055 kubeadm.go:640] restartCluster took 4m31.295508075s
	W1031 00:17:40.138262  249055 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1031 00:17:40.138297  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1031 00:17:43.970897  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:17:43.997796  248387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:17:44.038710  248387 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:17:44.038738  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:17:44.075299  248387 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:17:44.075333  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:17:44.084795  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 00:17:44.172770  248387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:17:42.670020  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:45.165914  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:46.365906  248387 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.39492875s)
	I1031 00:17:46.365968  248387 node_ready.go:35] waiting up to 6m0s for node "no-preload-640155" to be "Ready" ...
	I1031 00:17:46.365998  248387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.368158747s)
	I1031 00:17:46.366066  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.366074  248387 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.281185782s)
	I1031 00:17:46.366103  248387 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1031 00:17:46.366086  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.366354  248387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.430149836s)
	I1031 00:17:46.366390  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.366402  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.366600  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.366612  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.366622  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.366631  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.366682  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.366732  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.366742  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.366751  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.366761  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.368921  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.368922  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.368958  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.369248  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.369293  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.369307  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.375988  248387 node_ready.go:49] node "no-preload-640155" has status "Ready":"True"
	I1031 00:17:46.376021  248387 node_ready.go:38] duration metric: took 10.036603ms waiting for node "no-preload-640155" to be "Ready" ...
	I1031 00:17:46.376036  248387 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:17:46.401563  248387 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gp6pj" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:46.425939  248387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.253121961s)
	I1031 00:17:46.426019  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.426035  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.427461  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.427471  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.427488  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.427498  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.427508  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.427894  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.427943  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.427954  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.427971  248387 addons.go:467] Verifying addon metrics-server=true in "no-preload-640155"
	I1031 00:17:46.436605  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.436630  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.436927  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.436959  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.436987  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.438529  248387 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1031 00:17:46.439869  248387 addons.go:502] enable addons completed in 2.755015847s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1031 00:17:48.527903  248387 pod_ready.go:92] pod "coredns-5dd5756b68-gp6pj" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.527939  248387 pod_ready.go:81] duration metric: took 2.126335033s waiting for pod "coredns-5dd5756b68-gp6pj" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.527954  248387 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.544043  248387 pod_ready.go:92] pod "etcd-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.544070  248387 pod_ready.go:81] duration metric: took 16.106665ms waiting for pod "etcd-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.544085  248387 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.552043  248387 pod_ready.go:92] pod "kube-apiserver-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.552075  248387 pod_ready.go:81] duration metric: took 7.981099ms waiting for pod "kube-apiserver-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.552092  248387 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.563073  248387 pod_ready.go:92] pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.563112  248387 pod_ready.go:81] duration metric: took 11.009619ms waiting for pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.563128  248387 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pkjsl" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.771051  248387 pod_ready.go:92] pod "kube-proxy-pkjsl" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.771080  248387 pod_ready.go:81] duration metric: took 207.944354ms waiting for pod "kube-proxy-pkjsl" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.771090  248387 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:49.170323  248387 pod_ready.go:92] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:49.170354  248387 pod_ready.go:81] duration metric: took 399.25516ms waiting for pod "kube-scheduler-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:49.170369  248387 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:47.166417  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:49.665614  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:51.479213  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:53.979583  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:54.802281  249055 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.663950968s)
	I1031 00:17:54.802401  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:17:54.818228  249055 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:17:54.829802  249055 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:17:54.841203  249055 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:17:54.841254  249055 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1031 00:17:54.900359  249055 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1031 00:17:54.900453  249055 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 00:17:55.068403  249055 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 00:17:55.068563  249055 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 00:17:55.068676  249055 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 00:17:55.316737  249055 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 00:17:51.665839  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:53.666626  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:55.319016  249055 out.go:204]   - Generating certificates and keys ...
	I1031 00:17:55.319172  249055 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 00:17:55.319275  249055 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 00:17:55.319395  249055 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1031 00:17:55.319481  249055 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1031 00:17:55.319603  249055 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1031 00:17:55.320419  249055 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1031 00:17:55.320814  249055 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1031 00:17:55.321700  249055 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1031 00:17:55.322211  249055 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1031 00:17:55.322708  249055 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1031 00:17:55.323252  249055 kubeadm.go:322] [certs] Using the existing "sa" key
	I1031 00:17:55.323344  249055 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 00:17:55.388450  249055 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 00:17:55.461692  249055 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 00:17:55.807861  249055 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 00:17:55.963028  249055 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 00:17:55.963510  249055 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 00:17:55.966001  249055 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 00:17:55.967951  249055 out.go:204]   - Booting up control plane ...
	I1031 00:17:55.968125  249055 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 00:17:55.968238  249055 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 00:17:55.968343  249055 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 00:17:55.989357  249055 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 00:17:55.990439  249055 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 00:17:55.990548  249055 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1031 00:17:56.126548  249055 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 00:17:56.479126  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:58.479232  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:56.166722  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:58.667319  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:00.980893  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:03.481571  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:04.629984  249055 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502923 seconds
	I1031 00:18:04.630137  249055 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 00:18:04.643529  249055 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 00:18:05.178336  249055 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 00:18:05.178549  249055 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-892233 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 00:18:05.695447  249055 kubeadm.go:322] [bootstrap-token] Using token: g00nr2.87o2mnv2u0jwf81d
	I1031 00:18:01.165232  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:03.166303  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:05.664899  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:05.696918  249055 out.go:204]   - Configuring RBAC rules ...
	I1031 00:18:05.697075  249055 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 00:18:05.706237  249055 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 00:18:05.720767  249055 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 00:18:05.731239  249055 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 00:18:05.736130  249055 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 00:18:05.740949  249055 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 00:18:05.759998  249055 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 00:18:06.051798  249055 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 00:18:06.118986  249055 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 00:18:06.119014  249055 kubeadm.go:322] 
	I1031 00:18:06.119078  249055 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 00:18:06.119084  249055 kubeadm.go:322] 
	I1031 00:18:06.119179  249055 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 00:18:06.119190  249055 kubeadm.go:322] 
	I1031 00:18:06.119225  249055 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 00:18:06.119282  249055 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 00:18:06.119326  249055 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 00:18:06.119332  249055 kubeadm.go:322] 
	I1031 00:18:06.119376  249055 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1031 00:18:06.119382  249055 kubeadm.go:322] 
	I1031 00:18:06.119424  249055 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 00:18:06.119435  249055 kubeadm.go:322] 
	I1031 00:18:06.119484  249055 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 00:18:06.119551  249055 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 00:18:06.119677  249055 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 00:18:06.119703  249055 kubeadm.go:322] 
	I1031 00:18:06.119830  249055 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 00:18:06.119938  249055 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 00:18:06.119957  249055 kubeadm.go:322] 
	I1031 00:18:06.120024  249055 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token g00nr2.87o2mnv2u0jwf81d \
	I1031 00:18:06.120179  249055 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 \
	I1031 00:18:06.120208  249055 kubeadm.go:322] 	--control-plane 
	I1031 00:18:06.120219  249055 kubeadm.go:322] 
	I1031 00:18:06.120330  249055 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 00:18:06.120368  249055 kubeadm.go:322] 
	I1031 00:18:06.120468  249055 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token g00nr2.87o2mnv2u0jwf81d \
	I1031 00:18:06.120559  249055 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1031 00:18:06.121091  249055 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 00:18:06.121119  249055 cni.go:84] Creating CNI manager for ""
	I1031 00:18:06.121127  249055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:18:06.123073  249055 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:18:06.124566  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:18:06.140064  249055 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:18:06.171195  249055 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:18:06.171343  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:06.171359  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4 minikube.k8s.io/name=default-k8s-diff-port-892233 minikube.k8s.io/updated_at=2023_10_31T00_18_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:06.256957  249055 ops.go:34] apiserver oom_adj: -16
	I1031 00:18:06.637700  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:06.769942  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:07.383359  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:07.883621  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:08.384017  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:08.883751  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:05.979125  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:07.979280  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:09.981296  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:07.666495  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:10.165765  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:09.383896  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:09.883523  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:10.384077  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:10.883546  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:11.383417  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:11.883493  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:12.384043  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:12.884000  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:13.383479  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:13.884100  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:12.479614  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:14.978890  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:12.666054  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:15.163419  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:14.384001  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:14.884297  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:15.383607  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:15.883617  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:16.383591  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:16.884141  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:17.384112  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:17.884196  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:18.384156  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:18.883687  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:19.114222  249055 kubeadm.go:1081] duration metric: took 12.942949327s to wait for elevateKubeSystemPrivileges.
	I1031 00:18:19.114261  249055 kubeadm.go:406] StartCluster complete in 5m10.335188993s
	I1031 00:18:19.114295  249055 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:18:19.114401  249055 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:18:19.116632  249055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:18:19.116971  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:18:19.117107  249055 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:18:19.117188  249055 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-892233"
	I1031 00:18:19.117202  249055 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-892233"
	I1031 00:18:19.117221  249055 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-892233"
	I1031 00:18:19.117231  249055 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-892233"
	I1031 00:18:19.117239  249055 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-892233"
	W1031 00:18:19.117243  249055 addons.go:240] addon metrics-server should already be in state true
	I1031 00:18:19.117265  249055 config.go:182] Loaded profile config "default-k8s-diff-port-892233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:18:19.117305  249055 host.go:66] Checking if "default-k8s-diff-port-892233" exists ...
	I1031 00:18:19.117213  249055 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-892233"
	W1031 00:18:19.117326  249055 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:18:19.117372  249055 host.go:66] Checking if "default-k8s-diff-port-892233" exists ...
	I1031 00:18:19.117711  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.117740  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.117746  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.117761  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.117711  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.117830  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.134384  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38003
	I1031 00:18:19.134426  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I1031 00:18:19.134810  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.134915  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.135437  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.135461  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.135648  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.135675  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.136018  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.136074  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.136578  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.136625  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.137167  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.137198  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.144184  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35153
	I1031 00:18:19.144763  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.145263  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.145293  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.145648  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.145852  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:18:19.152132  249055 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-892233"
	W1031 00:18:19.152194  249055 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:18:19.152240  249055 host.go:66] Checking if "default-k8s-diff-port-892233" exists ...
	I1031 00:18:19.152775  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.152867  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.154334  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45793
	I1031 00:18:19.155862  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38905
	I1031 00:18:19.157267  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.158677  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.158735  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.158863  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.164983  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.165014  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.165044  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.166267  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.166284  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:18:19.169122  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:18:19.169199  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:18:19.174627  249055 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:18:19.170934  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:18:19.176219  249055 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:18:19.177591  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:18:19.177619  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:18:19.179052  249055 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:18:19.176693  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45785
	I1031 00:18:19.178184  249055 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-892233" context rescaled to 1 replicas
	I1031 00:18:19.179171  249055 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.2 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:18:19.181526  249055 out.go:177] * Verifying Kubernetes components...
	I1031 00:18:19.182930  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:18:16.980163  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:18.981179  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:17.165555  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:19.174245  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:19.181603  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.184667  249055 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:18:19.184676  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:18:19.184683  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:18:19.184698  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.179546  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.184702  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:18:19.182398  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:18:19.184914  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:18:19.185097  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:18:19.185743  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.185761  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.185827  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:18:19.186516  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.187946  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.187988  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:18:19.188014  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.188359  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.188374  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.188549  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:18:19.188757  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:18:19.189003  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:18:19.189160  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:18:19.203564  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I1031 00:18:19.203935  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.204374  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.204399  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.204741  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.204994  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:18:19.207012  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:18:19.207266  249055 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:18:19.207283  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:18:19.207302  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:18:19.209950  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.210314  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:18:19.210332  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.210507  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:18:19.210701  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:18:19.210830  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:18:19.210962  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:18:19.423829  249055 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:18:19.423852  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:18:19.440581  249055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:18:19.466961  249055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:18:19.511517  249055 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:18:19.511543  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:18:19.591560  249055 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:18:19.591588  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:18:19.628414  249055 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-892233" to be "Ready" ...
	I1031 00:18:19.628560  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 00:18:19.648329  249055 node_ready.go:49] node "default-k8s-diff-port-892233" has status "Ready":"True"
	I1031 00:18:19.648353  249055 node_ready.go:38] duration metric: took 19.904402ms waiting for node "default-k8s-diff-port-892233" to be "Ready" ...
	I1031 00:18:19.648364  249055 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:18:19.658333  249055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:18:19.692147  249055 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-j9g85" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:21.904902  249055 pod_ready.go:102] pod "coredns-5dd5756b68-j9g85" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:22.104924  249055 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.637923019s)
	I1031 00:18:22.104999  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.104997  249055 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.664373813s)
	I1031 00:18:22.105008  249055 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.476413511s)
	I1031 00:18:22.105035  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.105013  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.105052  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.105035  249055 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1031 00:18:22.105350  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.105366  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.105376  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.105388  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.105479  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | Closing plugin on server side
	I1031 00:18:22.105541  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.105554  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.105573  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.105594  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.105821  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.105852  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.105860  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.105870  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.146205  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.146231  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.146598  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.146631  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.219948  249055 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.561551335s)
	I1031 00:18:22.220017  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.220033  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.220412  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.220441  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.220459  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.220474  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.220820  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.220840  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.220853  249055 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-892233"
	I1031 00:18:22.222793  249055 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1031 00:18:22.224194  249055 addons.go:502] enable addons completed in 3.107083845s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1031 00:18:22.880805  249055 pod_ready.go:92] pod "coredns-5dd5756b68-j9g85" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:22.880840  249055 pod_ready.go:81] duration metric: took 3.18866819s waiting for pod "coredns-5dd5756b68-j9g85" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:22.880853  249055 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pjtg4" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.912036  249055 pod_ready.go:92] pod "coredns-5dd5756b68-pjtg4" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:23.912066  249055 pod_ready.go:81] duration metric: took 1.031204489s waiting for pod "coredns-5dd5756b68-pjtg4" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.912079  249055 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.918589  249055 pod_ready.go:92] pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:23.918609  249055 pod_ready.go:81] duration metric: took 6.523247ms waiting for pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.918619  249055 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.925040  249055 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:23.925059  249055 pod_ready.go:81] duration metric: took 6.434141ms waiting for pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.925067  249055 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.073002  249055 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:24.073029  249055 pod_ready.go:81] duration metric: took 147.953037ms waiting for pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.073044  249055 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-77gzz" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:21.478451  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:23.479849  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:24.473158  249055 pod_ready.go:92] pod "kube-proxy-77gzz" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:24.473184  249055 pod_ready.go:81] duration metric: took 400.13282ms waiting for pod "kube-proxy-77gzz" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.473194  249055 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.873506  249055 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:24.873528  249055 pod_ready.go:81] duration metric: took 400.328112ms waiting for pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.873538  249055 pod_ready.go:38] duration metric: took 5.225163782s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:18:24.873558  249055 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:18:24.873617  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:18:24.890474  249055 api_server.go:72] duration metric: took 5.711236569s to wait for apiserver process to appear ...
	I1031 00:18:24.890508  249055 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:18:24.890533  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:18:24.896826  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 200:
	ok
	I1031 00:18:24.898203  249055 api_server.go:141] control plane version: v1.28.3
	I1031 00:18:24.898226  249055 api_server.go:131] duration metric: took 7.708512ms to wait for apiserver health ...
	I1031 00:18:24.898234  249055 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:18:25.076806  249055 system_pods.go:59] 9 kube-system pods found
	I1031 00:18:25.076835  249055 system_pods.go:61] "coredns-5dd5756b68-j9g85" [e4534565-4d9b-44d6-bcf1-5b57645645bc] Running
	I1031 00:18:25.076840  249055 system_pods.go:61] "coredns-5dd5756b68-pjtg4" [6c771175-3c51-4988-8b90-58ff0e33a5f8] Running
	I1031 00:18:25.076845  249055 system_pods.go:61] "etcd-default-k8s-diff-port-892233" [47dea79e-371e-45ff-960e-41e96a4427e5] Running
	I1031 00:18:25.076850  249055 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-892233" [87be303c-6850-4ab1-98a3-c8a08f601965] Running
	I1031 00:18:25.076854  249055 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-892233" [7533baa8-87b4-4fa9-8385-9945e0fffaf4] Running
	I1031 00:18:25.076857  249055 system_pods.go:61] "kube-proxy-77gzz" [e7cb1c4a-2ad0-47b9-bca4-2e03d4e1cf39] Running
	I1031 00:18:25.076861  249055 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-892233" [b7630ce4-db97-45a6-a9a3-f7b8f3128182] Running
	I1031 00:18:25.076868  249055 system_pods.go:61] "metrics-server-57f55c9bc5-8pc87" [c91683ff-11bf-4530-90c3-91f4b28e2dab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:18:25.076874  249055 system_pods.go:61] "storage-provisioner" [995d33e4-0d28-4efb-8d30-d5a05d04b61c] Running
	I1031 00:18:25.076882  249055 system_pods.go:74] duration metric: took 178.64211ms to wait for pod list to return data ...
	I1031 00:18:25.076889  249055 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:18:25.272531  249055 default_sa.go:45] found service account: "default"
	I1031 00:18:25.272557  249055 default_sa.go:55] duration metric: took 195.662215ms for default service account to be created ...
	I1031 00:18:25.272567  249055 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:18:25.477225  249055 system_pods.go:86] 9 kube-system pods found
	I1031 00:18:25.477258  249055 system_pods.go:89] "coredns-5dd5756b68-j9g85" [e4534565-4d9b-44d6-bcf1-5b57645645bc] Running
	I1031 00:18:25.477266  249055 system_pods.go:89] "coredns-5dd5756b68-pjtg4" [6c771175-3c51-4988-8b90-58ff0e33a5f8] Running
	I1031 00:18:25.477275  249055 system_pods.go:89] "etcd-default-k8s-diff-port-892233" [47dea79e-371e-45ff-960e-41e96a4427e5] Running
	I1031 00:18:25.477282  249055 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-892233" [87be303c-6850-4ab1-98a3-c8a08f601965] Running
	I1031 00:18:25.477292  249055 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-892233" [7533baa8-87b4-4fa9-8385-9945e0fffaf4] Running
	I1031 00:18:25.477298  249055 system_pods.go:89] "kube-proxy-77gzz" [e7cb1c4a-2ad0-47b9-bca4-2e03d4e1cf39] Running
	I1031 00:18:25.477309  249055 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-892233" [b7630ce4-db97-45a6-a9a3-f7b8f3128182] Running
	I1031 00:18:25.477323  249055 system_pods.go:89] "metrics-server-57f55c9bc5-8pc87" [c91683ff-11bf-4530-90c3-91f4b28e2dab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:18:25.477333  249055 system_pods.go:89] "storage-provisioner" [995d33e4-0d28-4efb-8d30-d5a05d04b61c] Running
	I1031 00:18:25.477343  249055 system_pods.go:126] duration metric: took 204.769317ms to wait for k8s-apps to be running ...
	I1031 00:18:25.477356  249055 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:18:25.477416  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:18:25.494054  249055 system_svc.go:56] duration metric: took 16.688482ms WaitForService to wait for kubelet.
	I1031 00:18:25.494079  249055 kubeadm.go:581] duration metric: took 6.314858374s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:18:25.494097  249055 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:18:25.673698  249055 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:18:25.673729  249055 node_conditions.go:123] node cpu capacity is 2
	I1031 00:18:25.673742  249055 node_conditions.go:105] duration metric: took 179.63938ms to run NodePressure ...
	I1031 00:18:25.673756  249055 start.go:228] waiting for startup goroutines ...
	I1031 00:18:25.673764  249055 start.go:233] waiting for cluster config update ...
	I1031 00:18:25.673778  249055 start.go:242] writing updated cluster config ...
	I1031 00:18:25.674107  249055 ssh_runner.go:195] Run: rm -f paused
	I1031 00:18:25.729477  249055 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1031 00:18:25.731433  249055 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-892233" cluster and "default" namespace by default
	I1031 00:18:21.666578  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:23.667065  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:25.980194  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:27.983361  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:26.166839  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:28.664820  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:30.665038  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:30.478938  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:32.980862  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:33.164907  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:35.165601  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:35.479491  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:37.978397  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:39.979837  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:37.167604  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:39.665586  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:41.982368  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:44.476905  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:41.359122  248084 pod_ready.go:81] duration metric: took 4m0.000818862s waiting for pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace to be "Ready" ...
	E1031 00:18:41.359173  248084 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:18:41.359193  248084 pod_ready.go:38] duration metric: took 4m1.201522433s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:18:41.359227  248084 kubeadm.go:640] restartCluster took 5m7.223824608s
	W1031 00:18:41.359305  248084 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1031 00:18:41.359335  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1031 00:18:46.480820  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:48.487440  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:46.413914  248084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.054544075s)
	I1031 00:18:46.414001  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:18:46.427362  248084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:18:46.436557  248084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:18:46.444929  248084 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:18:46.445010  248084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1031 00:18:46.659252  248084 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 00:18:50.978966  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:52.980133  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:59.061122  248084 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1031 00:18:59.061211  248084 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 00:18:59.061324  248084 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 00:18:59.061476  248084 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 00:18:59.061695  248084 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 00:18:59.061861  248084 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 00:18:59.061989  248084 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 00:18:59.062059  248084 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1031 00:18:59.062158  248084 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 00:18:59.063991  248084 out.go:204]   - Generating certificates and keys ...
	I1031 00:18:59.064091  248084 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 00:18:59.064178  248084 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 00:18:59.064261  248084 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1031 00:18:59.064320  248084 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1031 00:18:59.064400  248084 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1031 00:18:59.064478  248084 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1031 00:18:59.064590  248084 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1031 00:18:59.064687  248084 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1031 00:18:59.064777  248084 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1031 00:18:59.064884  248084 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1031 00:18:59.064967  248084 kubeadm.go:322] [certs] Using the existing "sa" key
	I1031 00:18:59.065056  248084 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 00:18:59.065123  248084 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 00:18:59.065199  248084 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 00:18:59.065284  248084 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 00:18:59.065375  248084 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 00:18:59.065483  248084 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 00:18:59.067362  248084 out.go:204]   - Booting up control plane ...
	I1031 00:18:59.067477  248084 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 00:18:59.067584  248084 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 00:18:59.067655  248084 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 00:18:59.067761  248084 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 00:18:59.067952  248084 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 00:18:59.068089  248084 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004306 seconds
	I1031 00:18:59.068174  248084 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 00:18:59.068330  248084 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 00:18:59.068419  248084 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 00:18:59.068536  248084 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-225140 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1031 00:18:59.068585  248084 kubeadm.go:322] [bootstrap-token] Using token: 1g4jse.zc5opkcf3va44z15
	I1031 00:18:59.070040  248084 out.go:204]   - Configuring RBAC rules ...
	I1031 00:18:59.070142  248084 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 00:18:59.070305  248084 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 00:18:59.070451  248084 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 00:18:59.070569  248084 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 00:18:59.070657  248084 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 00:18:59.070700  248084 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 00:18:59.070742  248084 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 00:18:59.070748  248084 kubeadm.go:322] 
	I1031 00:18:59.070799  248084 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 00:18:59.070809  248084 kubeadm.go:322] 
	I1031 00:18:59.070900  248084 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 00:18:59.070912  248084 kubeadm.go:322] 
	I1031 00:18:59.070933  248084 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 00:18:59.070983  248084 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 00:18:59.071030  248084 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 00:18:59.071035  248084 kubeadm.go:322] 
	I1031 00:18:59.071082  248084 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 00:18:59.071158  248084 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 00:18:59.071269  248084 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 00:18:59.071278  248084 kubeadm.go:322] 
	I1031 00:18:59.071392  248084 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1031 00:18:59.071498  248084 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 00:18:59.071509  248084 kubeadm.go:322] 
	I1031 00:18:59.071608  248084 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1g4jse.zc5opkcf3va44z15 \
	I1031 00:18:59.071749  248084 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 \
	I1031 00:18:59.071783  248084 kubeadm.go:322]     --control-plane 	  
	I1031 00:18:59.071793  248084 kubeadm.go:322] 
	I1031 00:18:59.071899  248084 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 00:18:59.071912  248084 kubeadm.go:322] 
	I1031 00:18:59.072051  248084 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1g4jse.zc5opkcf3va44z15 \
	I1031 00:18:59.072196  248084 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1031 00:18:59.072228  248084 cni.go:84] Creating CNI manager for ""
	I1031 00:18:59.072243  248084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:18:59.073949  248084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:18:55.479295  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:57.983131  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:59.075900  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:18:59.087288  248084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:18:59.112130  248084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:18:59.112241  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:59.112258  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4 minikube.k8s.io/name=old-k8s-version-225140 minikube.k8s.io/updated_at=2023_10_31T00_18_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:59.144297  248084 ops.go:34] apiserver oom_adj: -16
	I1031 00:18:59.352655  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:59.464268  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:00.069316  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:00.569382  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:00.481532  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:02.978563  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:01.069124  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:01.569535  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:02.069209  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:02.569292  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:03.069280  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:03.569469  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:04.069050  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:04.569082  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:05.068795  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:05.569625  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:05.479444  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:07.980592  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:09.982873  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:06.069318  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:06.569043  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:07.069599  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:07.569098  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:08.069690  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:08.569668  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:09.069735  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:09.569294  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:10.069080  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:10.569441  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:11.068991  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:11.569543  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:12.069495  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:12.568757  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:13.069012  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:13.569638  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:13.789009  248084 kubeadm.go:1081] duration metric: took 14.676828073s to wait for elevateKubeSystemPrivileges.
	I1031 00:19:13.789061  248084 kubeadm.go:406] StartCluster complete in 5m39.716410778s
	I1031 00:19:13.789090  248084 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:19:13.789209  248084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:19:13.791883  248084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:19:13.792204  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:19:13.792368  248084 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:19:13.792451  248084 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-225140"
	I1031 00:19:13.792457  248084 config.go:182] Loaded profile config "old-k8s-version-225140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1031 00:19:13.792471  248084 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-225140"
	W1031 00:19:13.792480  248084 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:19:13.792485  248084 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-225140"
	I1031 00:19:13.792515  248084 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-225140"
	I1031 00:19:13.792531  248084 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-225140"
	I1031 00:19:13.792534  248084 host.go:66] Checking if "old-k8s-version-225140" exists ...
	W1031 00:19:13.792540  248084 addons.go:240] addon metrics-server should already be in state true
	I1031 00:19:13.792568  248084 host.go:66] Checking if "old-k8s-version-225140" exists ...
	I1031 00:19:13.792516  248084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-225140"
	I1031 00:19:13.792981  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.792981  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.793021  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.793104  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.793147  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.793254  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.811115  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34449
	I1031 00:19:13.811377  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I1031 00:19:13.811793  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.811913  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.812411  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.812433  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.812586  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.812636  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.812764  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.812833  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35585
	I1031 00:19:13.813035  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:19:13.813186  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.813284  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.813624  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.813649  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.813896  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.813938  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.813984  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.814742  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.814791  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.817328  248084 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-225140"
	W1031 00:19:13.817352  248084 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:19:13.817383  248084 host.go:66] Checking if "old-k8s-version-225140" exists ...
	I1031 00:19:13.817651  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.817676  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.831410  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I1031 00:19:13.832059  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.832665  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.832686  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.833071  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.833396  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:19:13.834672  248084 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-225140" context rescaled to 1 replicas
	I1031 00:19:13.834715  248084 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.65 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:19:13.837043  248084 out.go:177] * Verifying Kubernetes components...
	I1031 00:19:13.834927  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I1031 00:19:13.835269  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:19:13.835504  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I1031 00:19:13.837823  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.838827  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:19:13.840427  248084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:19:13.838307  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.839305  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.842067  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.842200  248084 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:19:13.842220  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:19:13.842259  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:19:13.842518  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.843110  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.843159  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.843539  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.843577  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.844178  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.844488  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:19:13.846259  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.846704  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:19:13.848811  248084 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:19:12.479334  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:14.484105  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:13.847143  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:19:13.847192  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:19:13.850295  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.850300  248084 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:19:13.850319  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:19:13.850341  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:19:13.850537  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:19:13.850712  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:19:13.851115  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:19:13.853651  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.854192  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:19:13.854226  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.854563  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:19:13.854758  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:19:13.854967  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:19:13.855112  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:19:13.862473  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33537
	I1031 00:19:13.862970  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.863496  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.863526  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.864026  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.864257  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:19:13.866270  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:19:13.866530  248084 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:19:13.866546  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:19:13.866565  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:19:13.870580  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.870992  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:19:13.871028  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.871142  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:19:13.871372  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:19:13.871542  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:19:13.871678  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:19:14.034938  248084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:19:14.040988  248084 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:19:14.041016  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:19:14.061666  248084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:19:14.111727  248084 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:19:14.111758  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:19:14.125610  248084 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-225140" to be "Ready" ...
	I1031 00:19:14.125707  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 00:19:14.165369  248084 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:19:14.165397  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:19:14.193366  248084 node_ready.go:49] node "old-k8s-version-225140" has status "Ready":"True"
	I1031 00:19:14.193389  248084 node_ready.go:38] duration metric: took 67.750717ms waiting for node "old-k8s-version-225140" to be "Ready" ...
	I1031 00:19:14.193401  248084 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:19:14.207505  248084 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace to be "Ready" ...
	I1031 00:19:14.276613  248084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:19:15.572065  248084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.537074399s)
	I1031 00:19:15.572136  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.572152  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.572177  248084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.510470973s)
	I1031 00:19:15.572219  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.572238  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.572336  248084 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.446596481s)
	I1031 00:19:15.572363  248084 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1031 00:19:15.572603  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.572621  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.572632  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.572642  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.572697  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.572711  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.572757  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.572778  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.572756  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Closing plugin on server side
	I1031 00:19:15.572908  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Closing plugin on server side
	I1031 00:19:15.572910  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.572970  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.573533  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.573554  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.586186  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.586210  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.586507  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.586530  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.586546  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Closing plugin on server side
	I1031 00:19:15.700772  248084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.424096792s)
	I1031 00:19:15.700835  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.700851  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.701196  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.701217  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.701230  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.701242  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.701531  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Closing plugin on server side
	I1031 00:19:15.701561  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.701574  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.701585  248084 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-225140"
	I1031 00:19:15.703404  248084 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1031 00:19:15.704856  248084 addons.go:502] enable addons completed in 1.91251063s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1031 00:19:16.980629  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:19.478989  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:16.278623  248084 pod_ready.go:102] pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:18.779192  248084 pod_ready.go:102] pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:21.978882  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:23.981260  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:21.276797  248084 pod_ready.go:102] pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:23.277531  248084 pod_ready.go:92] pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace has status "Ready":"True"
	I1031 00:19:23.277561  248084 pod_ready.go:81] duration metric: took 9.070020963s waiting for pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace to be "Ready" ...
	I1031 00:19:23.277575  248084 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v2pp4" in "kube-system" namespace to be "Ready" ...
	I1031 00:19:23.283345  248084 pod_ready.go:92] pod "kube-proxy-v2pp4" in "kube-system" namespace has status "Ready":"True"
	I1031 00:19:23.283367  248084 pod_ready.go:81] duration metric: took 5.78532ms waiting for pod "kube-proxy-v2pp4" in "kube-system" namespace to be "Ready" ...
	I1031 00:19:23.283374  248084 pod_ready.go:38] duration metric: took 9.089964646s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:19:23.283394  248084 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:19:23.283452  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:19:23.300275  248084 api_server.go:72] duration metric: took 9.465522842s to wait for apiserver process to appear ...
	I1031 00:19:23.300294  248084 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:19:23.300308  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:19:23.309064  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 200:
	ok
	I1031 00:19:23.310485  248084 api_server.go:141] control plane version: v1.16.0
	I1031 00:19:23.310508  248084 api_server.go:131] duration metric: took 10.207384ms to wait for apiserver health ...
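
	The healthz check logged just above (GET https://192.168.72.65:8443/healthz returning "ok") is the readiness gate before the tooling moves on to pod checks. Purely as an illustrative sketch, not minikube's actual api_server.go code, the following standalone Go program polls such an endpoint until it answers "ok" or a deadline passes; the 2-minute timeout and the skipped TLS verification are assumptions for the example, the URL is the one from the log.

	// healthzwait.go - illustrative sketch only (assumed timeout, self-signed-CA handling)
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 with body "ok" or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The test cluster serves a self-signed certificate, so verification
			// is skipped here purely for illustration.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.65:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
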
	I1031 00:19:23.310517  248084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:19:23.314181  248084 system_pods.go:59] 4 kube-system pods found
	I1031 00:19:23.314205  248084 system_pods.go:61] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:23.314210  248084 system_pods.go:61] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:23.314217  248084 system_pods.go:61] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:23.314224  248084 system_pods.go:61] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:23.314230  248084 system_pods.go:74] duration metric: took 3.706807ms to wait for pod list to return data ...
	I1031 00:19:23.314236  248084 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:19:23.316411  248084 default_sa.go:45] found service account: "default"
	I1031 00:19:23.316435  248084 default_sa.go:55] duration metric: took 2.192647ms for default service account to be created ...
	I1031 00:19:23.316443  248084 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:19:23.320111  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:23.320137  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:23.320148  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:23.320159  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:23.320167  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:23.320190  248084 retry.go:31] will retry after 199.965979ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:23.524726  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:23.524754  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:23.524760  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:23.524766  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:23.524773  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:23.524788  248084 retry.go:31] will retry after 276.623866ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:23.807038  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:23.807066  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:23.807072  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:23.807080  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:23.807087  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:23.807104  248084 retry.go:31] will retry after 316.245952ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:24.128239  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:24.128268  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:24.128277  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:24.128287  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:24.128297  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:24.128326  248084 retry.go:31] will retry after 483.558456ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:24.616454  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:24.616486  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:24.616494  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:24.616505  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:24.616514  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:24.616534  248084 retry.go:31] will retry after 700.807178ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:25.323617  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:25.323666  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:25.323675  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:25.323687  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:25.323697  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:25.323718  248084 retry.go:31] will retry after 768.27646ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:26.485923  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:28.978283  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:26.097257  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:26.097283  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:26.097288  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:26.097295  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:26.097302  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:26.097320  248084 retry.go:31] will retry after 1.004884505s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:27.108295  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:27.108330  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:27.108339  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:27.108350  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:27.108360  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:27.108380  248084 retry.go:31] will retry after 1.256932803s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:28.369629  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:28.369668  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:28.369677  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:28.369688  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:28.369698  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:28.369722  248084 retry.go:31] will retry after 1.554545012s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:29.930268  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:29.930295  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:29.930314  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:29.930322  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:29.930338  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:29.930358  248084 retry.go:31] will retry after 1.794325328s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:30.981402  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:33.478794  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:31.729473  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:31.729511  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:31.729520  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:31.729531  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:31.729542  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:31.729563  248084 retry.go:31] will retry after 2.111450847s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:33.846759  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:33.846787  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:33.846792  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:33.846801  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:33.846807  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:33.846824  248084 retry.go:31] will retry after 2.198886772s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:35.981890  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:38.478284  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:36.050460  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:36.050491  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:36.050496  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:36.050505  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:36.050512  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:36.050530  248084 retry.go:31] will retry after 3.361148685s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:39.417603  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:39.417633  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:39.417640  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:39.417651  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:39.417660  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:39.417680  248084 retry.go:31] will retry after 4.41093106s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:40.978990  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:43.479103  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:43.834041  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:43.834083  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:43.834093  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:43.834104  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:43.834115  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:43.834134  248084 retry.go:31] will retry after 5.294476287s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:45.482986  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:47.978397  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:49.980183  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:49.133233  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:49.133264  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:49.133269  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:49.133276  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:49.133284  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:49.133300  248084 retry.go:31] will retry after 7.429511286s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:51.980355  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:53.981222  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:56.480456  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:58.979640  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:56.567247  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:56.567278  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:56.567284  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:56.567290  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:56.567297  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:56.567314  248084 retry.go:31] will retry after 10.944177906s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:20:01.477606  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:03.481220  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:05.979560  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:07.984688  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:07.518274  248084 system_pods.go:86] 7 kube-system pods found
	I1031 00:20:07.518300  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:20:07.518306  248084 system_pods.go:89] "kube-apiserver-old-k8s-version-225140" [8452eeb3-bce5-4105-aca6-41c438d0cd33] Pending
	I1031 00:20:07.518310  248084 system_pods.go:89] "kube-controller-manager-old-k8s-version-225140" [8d9ce065-09f3-4323-a564-195c4ae96389] Pending
	I1031 00:20:07.518314  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:20:07.518318  248084 system_pods.go:89] "kube-scheduler-old-k8s-version-225140" [aa567dc5-4668-4730-bfee-e1afdac14098] Pending
	I1031 00:20:07.518325  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:20:07.518331  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:20:07.518349  248084 retry.go:31] will retry after 8.381829497s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:20:10.485015  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:12.978647  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:15.479489  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:17.980834  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:15.906034  248084 system_pods.go:86] 8 kube-system pods found
	I1031 00:20:15.906066  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:20:15.906074  248084 system_pods.go:89] "etcd-old-k8s-version-225140" [c3c7682d-4b48-4e50-ba06-676723621872] Pending
	I1031 00:20:15.906080  248084 system_pods.go:89] "kube-apiserver-old-k8s-version-225140" [8452eeb3-bce5-4105-aca6-41c438d0cd33] Running
	I1031 00:20:15.906087  248084 system_pods.go:89] "kube-controller-manager-old-k8s-version-225140" [8d9ce065-09f3-4323-a564-195c4ae96389] Running
	I1031 00:20:15.906093  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:20:15.906100  248084 system_pods.go:89] "kube-scheduler-old-k8s-version-225140" [aa567dc5-4668-4730-bfee-e1afdac14098] Running
	I1031 00:20:15.906109  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:20:15.906120  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:20:15.906138  248084 retry.go:31] will retry after 11.167332732s: missing components: etcd
	I1031 00:20:20.481147  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:22.980858  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:24.982265  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:27.080224  248084 system_pods.go:86] 8 kube-system pods found
	I1031 00:20:27.080263  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:20:27.080272  248084 system_pods.go:89] "etcd-old-k8s-version-225140" [c3c7682d-4b48-4e50-ba06-676723621872] Running
	I1031 00:20:27.080279  248084 system_pods.go:89] "kube-apiserver-old-k8s-version-225140" [8452eeb3-bce5-4105-aca6-41c438d0cd33] Running
	I1031 00:20:27.080287  248084 system_pods.go:89] "kube-controller-manager-old-k8s-version-225140" [8d9ce065-09f3-4323-a564-195c4ae96389] Running
	I1031 00:20:27.080294  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:20:27.080301  248084 system_pods.go:89] "kube-scheduler-old-k8s-version-225140" [aa567dc5-4668-4730-bfee-e1afdac14098] Running
	I1031 00:20:27.080318  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:20:27.080332  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:20:27.080343  248084 system_pods.go:126] duration metric: took 1m3.763892339s to wait for k8s-apps to be running ...
	I1031 00:20:27.080357  248084 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:20:27.080408  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:20:27.098039  248084 system_svc.go:56] duration metric: took 17.670849ms WaitForService to wait for kubelet.
	I1031 00:20:27.098075  248084 kubeadm.go:581] duration metric: took 1m13.263332949s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:20:27.098105  248084 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:20:27.101093  248084 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:20:27.101126  248084 node_conditions.go:123] node cpu capacity is 2
	I1031 00:20:27.101182  248084 node_conditions.go:105] duration metric: took 3.066191ms to run NodePressure ...
	I1031 00:20:27.101198  248084 start.go:228] waiting for startup goroutines ...
	I1031 00:20:27.101208  248084 start.go:233] waiting for cluster config update ...
	I1031 00:20:27.101222  248084 start.go:242] writing updated cluster config ...
	I1031 00:20:27.101586  248084 ssh_runner.go:195] Run: rm -f paused
	I1031 00:20:27.157211  248084 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1031 00:20:27.159327  248084 out.go:177] 
	W1031 00:20:27.160872  248084 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1031 00:20:27.163644  248084 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1031 00:20:27.165443  248084 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-225140" cluster and "default" namespace by default
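
	The "will retry after ..." lines in the system_pods wait above show a poll loop whose delay grows from roughly 200ms toward ~10s until the missing control-plane pods appear. As a minimal sketch of that pattern, and not a reproduction of minikube's retry.go, the following standalone Go program retries a check with a jittered, roughly exponential backoff; the growth factor, cap, and overall budget are assumptions chosen to resemble the intervals seen in the log.

	// backoffwait.go - illustrative sketch only (assumed backoff policy)
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls check until it succeeds or maxWait is exhausted,
	// sleeping a growing, jittered delay between attempts.
	func retryWithBackoff(check func() error, maxWait time.Duration) error {
		delay := 200 * time.Millisecond
		deadline := time.Now().Add(maxWait)
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().Add(delay).After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			}
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
			// Grow the delay with a little jitter, capped so retries stay frequent.
			delay = time.Duration(float64(delay) * (1.3 + rand.Float64()*0.4))
			if delay > 15*time.Second {
				delay = 15 * time.Second
			}
		}
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(func() error {
			attempts++
			if attempts < 5 {
				return errors.New("missing components: etcd, kube-apiserver")
			}
			return nil
		}, time.Minute)
		fmt.Println("result:", err)
	}
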
	I1031 00:20:27.481582  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:29.978812  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:32.478965  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:34.479052  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:36.486487  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:38.981098  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:41.478500  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:43.478933  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:45.978794  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:47.978937  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:49.980825  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:52.479268  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:54.978422  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:57.478476  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:59.478602  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:01.478639  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:03.479969  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:05.978907  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:08.478656  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:10.978877  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:12.981683  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:15.479094  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:17.978893  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:20.479878  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:22.483287  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:24.978077  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:26.979122  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:28.981476  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:31.478577  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:33.479816  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:35.979787  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:37.981859  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:40.477762  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:42.479382  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:44.479508  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:46.479851  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:48.482610  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:49.171002  248387 pod_ready.go:81] duration metric: took 4m0.000595541s waiting for pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace to be "Ready" ...
	E1031 00:21:49.171048  248387 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:21:49.171063  248387 pod_ready.go:38] duration metric: took 4m2.795014386s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:21:49.171097  248387 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:21:49.171149  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:21:49.171248  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:21:49.226512  248387 cri.go:89] found id: "d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:21:49.226543  248387 cri.go:89] found id: ""
	I1031 00:21:49.226555  248387 logs.go:284] 1 containers: [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850]
	I1031 00:21:49.226647  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.230993  248387 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:21:49.231060  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:21:49.270646  248387 cri.go:89] found id: "07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:21:49.270677  248387 cri.go:89] found id: ""
	I1031 00:21:49.270688  248387 logs.go:284] 1 containers: [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3]
	I1031 00:21:49.270760  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.275165  248387 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:21:49.275225  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:21:49.317730  248387 cri.go:89] found id: "12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:21:49.317757  248387 cri.go:89] found id: ""
	I1031 00:21:49.317768  248387 logs.go:284] 1 containers: [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e]
	I1031 00:21:49.317818  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.322362  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:21:49.322430  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:21:49.361430  248387 cri.go:89] found id: "6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:21:49.361462  248387 cri.go:89] found id: ""
	I1031 00:21:49.361474  248387 logs.go:284] 1 containers: [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c]
	I1031 00:21:49.361535  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.365642  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:21:49.365713  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:21:49.409230  248387 cri.go:89] found id: "744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:21:49.409258  248387 cri.go:89] found id: ""
	I1031 00:21:49.409269  248387 logs.go:284] 1 containers: [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373]
	I1031 00:21:49.409329  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.413540  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:21:49.413622  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:21:49.458477  248387 cri.go:89] found id: "d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:21:49.458506  248387 cri.go:89] found id: ""
	I1031 00:21:49.458518  248387 logs.go:284] 1 containers: [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb]
	I1031 00:21:49.458586  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.462471  248387 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:21:49.462540  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:21:49.498272  248387 cri.go:89] found id: ""
	I1031 00:21:49.498299  248387 logs.go:284] 0 containers: []
	W1031 00:21:49.498309  248387 logs.go:286] No container was found matching "kindnet"
	I1031 00:21:49.498316  248387 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:21:49.498386  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:21:49.538677  248387 cri.go:89] found id: "bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:21:49.538704  248387 cri.go:89] found id: ""
	I1031 00:21:49.538714  248387 logs.go:284] 1 containers: [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07]
	I1031 00:21:49.538776  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.544293  248387 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:21:49.544318  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:21:49.719505  248387 logs.go:123] Gathering logs for kube-apiserver [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850] ...
	I1031 00:21:49.719542  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:21:49.770108  248387 logs.go:123] Gathering logs for kube-scheduler [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c] ...
	I1031 00:21:49.770146  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:21:49.826250  248387 logs.go:123] Gathering logs for storage-provisioner [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07] ...
	I1031 00:21:49.826289  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:21:49.864212  248387 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:21:49.864244  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:21:50.278307  248387 logs.go:123] Gathering logs for container status ...
	I1031 00:21:50.278348  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:21:50.332860  248387 logs.go:123] Gathering logs for kubelet ...
	I1031 00:21:50.332894  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 00:21:50.413002  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.413224  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.413368  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.413524  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:21:50.435703  248387 logs.go:123] Gathering logs for dmesg ...
	I1031 00:21:50.435739  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:21:50.451836  248387 logs.go:123] Gathering logs for etcd [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3] ...
	I1031 00:21:50.451865  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:21:50.493883  248387 logs.go:123] Gathering logs for coredns [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e] ...
	I1031 00:21:50.493912  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:21:50.533935  248387 logs.go:123] Gathering logs for kube-proxy [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373] ...
	I1031 00:21:50.533967  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:21:50.582053  248387 logs.go:123] Gathering logs for kube-controller-manager [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb] ...
	I1031 00:21:50.582094  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:21:50.638988  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:21:50.639021  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 00:21:50.639177  248387 out.go:239] X Problems detected in kubelet:
	W1031 00:21:50.639191  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.639201  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.639213  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.639219  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:21:50.639225  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:21:50.639232  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
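
	The log-gathering round that just finished shells out to "sudo /usr/bin/crictl logs --tail 400 <container-id>" for each control-plane container it found. As a hypothetical helper only (the function name and structure are assumptions; the container ID and --tail value are taken from the log), the same step could be driven from Go via os/exec:

	// crilogs.go - illustrative sketch only (assumed helper around the crictl CLI)
	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerLogs returns the last `tail` lines of a CRI container's logs,
	// mirroring the "sudo crictl logs --tail N <id>" commands in the transcript.
	func containerLogs(id string, tail int) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(tail), id).CombinedOutput()
		return string(out), err
	}

	func main() {
		// kube-apiserver container ID observed in the log above.
		id := "d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
		logs, err := containerLogs(id, 400)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println(logs)
	}
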
	I1031 00:22:00.639748  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:22:00.663810  248387 api_server.go:72] duration metric: took 4m16.69659563s to wait for apiserver process to appear ...
	I1031 00:22:00.663846  248387 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:22:00.663904  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:22:00.663980  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:22:00.705584  248387 cri.go:89] found id: "d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:22:00.705611  248387 cri.go:89] found id: ""
	I1031 00:22:00.705620  248387 logs.go:284] 1 containers: [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850]
	I1031 00:22:00.705672  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.710031  248387 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:22:00.710113  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:22:00.747821  248387 cri.go:89] found id: "07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:22:00.747850  248387 cri.go:89] found id: ""
	I1031 00:22:00.747861  248387 logs.go:284] 1 containers: [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3]
	I1031 00:22:00.747926  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.752647  248387 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:22:00.752733  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:22:00.802165  248387 cri.go:89] found id: "12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:22:00.802200  248387 cri.go:89] found id: ""
	I1031 00:22:00.802210  248387 logs.go:284] 1 containers: [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e]
	I1031 00:22:00.802274  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.807367  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:22:00.807451  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:22:00.846633  248387 cri.go:89] found id: "6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:22:00.846661  248387 cri.go:89] found id: ""
	I1031 00:22:00.846670  248387 logs.go:284] 1 containers: [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c]
	I1031 00:22:00.846736  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.851197  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:22:00.851282  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:22:00.891522  248387 cri.go:89] found id: "744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:22:00.891549  248387 cri.go:89] found id: ""
	I1031 00:22:00.891559  248387 logs.go:284] 1 containers: [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373]
	I1031 00:22:00.891624  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.896269  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:22:00.896369  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:22:00.937565  248387 cri.go:89] found id: "d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:22:00.937594  248387 cri.go:89] found id: ""
	I1031 00:22:00.937606  248387 logs.go:284] 1 containers: [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb]
	I1031 00:22:00.937672  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.942205  248387 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:22:00.942287  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:22:00.984788  248387 cri.go:89] found id: ""
	I1031 00:22:00.984814  248387 logs.go:284] 0 containers: []
	W1031 00:22:00.984821  248387 logs.go:286] No container was found matching "kindnet"
	I1031 00:22:00.984827  248387 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:22:00.984883  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:22:01.032572  248387 cri.go:89] found id: "bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:22:01.032601  248387 cri.go:89] found id: ""
	I1031 00:22:01.032621  248387 logs.go:284] 1 containers: [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07]
	I1031 00:22:01.032685  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:01.037253  248387 logs.go:123] Gathering logs for container status ...
	I1031 00:22:01.037280  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:22:01.096027  248387 logs.go:123] Gathering logs for kubelet ...
	I1031 00:22:01.096065  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 00:22:01.166608  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:01.166786  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:01.166925  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:01.167075  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:22:01.188441  248387 logs.go:123] Gathering logs for etcd [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3] ...
	I1031 00:22:01.188473  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:22:01.238925  248387 logs.go:123] Gathering logs for coredns [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e] ...
	I1031 00:22:01.238961  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:22:01.278987  248387 logs.go:123] Gathering logs for kube-controller-manager [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb] ...
	I1031 00:22:01.279024  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:22:01.340249  248387 logs.go:123] Gathering logs for kube-proxy [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373] ...
	I1031 00:22:01.340284  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:22:01.381155  248387 logs.go:123] Gathering logs for storage-provisioner [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07] ...
	I1031 00:22:01.381191  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:22:01.421808  248387 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:22:01.421842  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:22:01.817836  248387 logs.go:123] Gathering logs for dmesg ...
	I1031 00:22:01.817877  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:22:01.832590  248387 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:22:01.832620  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:22:01.961348  248387 logs.go:123] Gathering logs for kube-apiserver [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850] ...
	I1031 00:22:01.961384  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:22:02.023997  248387 logs.go:123] Gathering logs for kube-scheduler [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c] ...
	I1031 00:22:02.024055  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:22:02.087279  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:22:02.087321  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 00:22:02.087437  248387 out.go:239] X Problems detected in kubelet:
	W1031 00:22:02.087460  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:02.087476  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:02.087485  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:02.087495  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:22:02.087513  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:22:02.087527  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:22:12.090012  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:22:12.096458  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 200:
	ok
	I1031 00:22:12.097833  248387 api_server.go:141] control plane version: v1.28.3
	I1031 00:22:12.097860  248387 api_server.go:131] duration metric: took 11.434005759s to wait for apiserver health ...
	I1031 00:22:12.097872  248387 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:22:12.097901  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:22:12.098004  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:22:12.161098  248387 cri.go:89] found id: "d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:22:12.161129  248387 cri.go:89] found id: ""
	I1031 00:22:12.161140  248387 logs.go:284] 1 containers: [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850]
	I1031 00:22:12.161199  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.166236  248387 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:22:12.166325  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:22:12.208793  248387 cri.go:89] found id: "07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:22:12.208815  248387 cri.go:89] found id: ""
	I1031 00:22:12.208824  248387 logs.go:284] 1 containers: [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3]
	I1031 00:22:12.208871  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.213722  248387 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:22:12.213791  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:22:12.256006  248387 cri.go:89] found id: "12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:22:12.256036  248387 cri.go:89] found id: ""
	I1031 00:22:12.256046  248387 logs.go:284] 1 containers: [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e]
	I1031 00:22:12.256116  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.260468  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:22:12.260546  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:22:12.305580  248387 cri.go:89] found id: "6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:22:12.305608  248387 cri.go:89] found id: ""
	I1031 00:22:12.305618  248387 logs.go:284] 1 containers: [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c]
	I1031 00:22:12.305687  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.313321  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:22:12.313390  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:22:12.359900  248387 cri.go:89] found id: "744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:22:12.359928  248387 cri.go:89] found id: ""
	I1031 00:22:12.359939  248387 logs.go:284] 1 containers: [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373]
	I1031 00:22:12.360003  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.364087  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:22:12.364171  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:22:12.403635  248387 cri.go:89] found id: "d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:22:12.403660  248387 cri.go:89] found id: ""
	I1031 00:22:12.403675  248387 logs.go:284] 1 containers: [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb]
	I1031 00:22:12.403743  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.408014  248387 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:22:12.408087  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:22:12.449718  248387 cri.go:89] found id: ""
	I1031 00:22:12.449741  248387 logs.go:284] 0 containers: []
	W1031 00:22:12.449748  248387 logs.go:286] No container was found matching "kindnet"
	I1031 00:22:12.449753  248387 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:22:12.449802  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:22:12.490301  248387 cri.go:89] found id: "bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:22:12.490330  248387 cri.go:89] found id: ""
	I1031 00:22:12.490340  248387 logs.go:284] 1 containers: [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07]
	I1031 00:22:12.490396  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.495061  248387 logs.go:123] Gathering logs for kube-proxy [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373] ...
	I1031 00:22:12.495125  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:22:12.537124  248387 logs.go:123] Gathering logs for kube-controller-manager [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb] ...
	I1031 00:22:12.537163  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:22:12.597600  248387 logs.go:123] Gathering logs for storage-provisioner [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07] ...
	I1031 00:22:12.597642  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:22:12.637344  248387 logs.go:123] Gathering logs for container status ...
	I1031 00:22:12.637385  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:22:12.691076  248387 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:22:12.691107  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:22:12.820546  248387 logs.go:123] Gathering logs for kube-apiserver [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850] ...
	I1031 00:22:12.820578  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:22:12.871913  248387 logs.go:123] Gathering logs for coredns [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e] ...
	I1031 00:22:12.871953  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:22:12.914661  248387 logs.go:123] Gathering logs for kube-scheduler [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c] ...
	I1031 00:22:12.914705  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:22:12.965771  248387 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:22:12.965810  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:22:13.352819  248387 logs.go:123] Gathering logs for kubelet ...
	I1031 00:22:13.352862  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 00:22:13.424722  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.424906  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.425062  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.425220  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:22:13.447363  248387 logs.go:123] Gathering logs for dmesg ...
	I1031 00:22:13.447393  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:22:13.462468  248387 logs.go:123] Gathering logs for etcd [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3] ...
	I1031 00:22:13.462502  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:22:13.507930  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:22:13.507960  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 00:22:13.508045  248387 out.go:239] X Problems detected in kubelet:
	W1031 00:22:13.508060  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.508072  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.508084  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.508097  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:22:13.508107  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:22:13.508114  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:22:23.516544  248387 system_pods.go:59] 8 kube-system pods found
	I1031 00:22:23.516574  248387 system_pods.go:61] "coredns-5dd5756b68-gp6pj" [b7086342-a1ed-42b3-819a-ad7d8211ad17] Running
	I1031 00:22:23.516579  248387 system_pods.go:61] "etcd-no-preload-640155" [d9381fc3-0181-4631-90e7-6749d37cf8ab] Running
	I1031 00:22:23.516584  248387 system_pods.go:61] "kube-apiserver-no-preload-640155" [26b9547d-6b10-428a-a26f-47b007f06402] Running
	I1031 00:22:23.516588  248387 system_pods.go:61] "kube-controller-manager-no-preload-640155" [7b5ec3dd-11a2-4409-a271-e3f4149c49fe] Running
	I1031 00:22:23.516592  248387 system_pods.go:61] "kube-proxy-pkjsl" [3cc67cf4-4a59-42bf-a6ca-b2be409f5077] Running
	I1031 00:22:23.516597  248387 system_pods.go:61] "kube-scheduler-no-preload-640155" [f027c450-e0ac-4184-88c8-5de421603b25] Running
	I1031 00:22:23.516604  248387 system_pods.go:61] "metrics-server-57f55c9bc5-d2xg4" [b16ae9e6-6deb-485f-af5c-35cafada4a39] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:22:23.516613  248387 system_pods.go:61] "storage-provisioner" [acf2b5d0-1773-4ee6-882d-daff300f9d80] Running
	I1031 00:22:23.516620  248387 system_pods.go:74] duration metric: took 11.418741675s to wait for pod list to return data ...
	I1031 00:22:23.516630  248387 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:22:23.520026  248387 default_sa.go:45] found service account: "default"
	I1031 00:22:23.520050  248387 default_sa.go:55] duration metric: took 3.413856ms for default service account to be created ...
	I1031 00:22:23.520058  248387 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:22:23.526672  248387 system_pods.go:86] 8 kube-system pods found
	I1031 00:22:23.526704  248387 system_pods.go:89] "coredns-5dd5756b68-gp6pj" [b7086342-a1ed-42b3-819a-ad7d8211ad17] Running
	I1031 00:22:23.526712  248387 system_pods.go:89] "etcd-no-preload-640155" [d9381fc3-0181-4631-90e7-6749d37cf8ab] Running
	I1031 00:22:23.526719  248387 system_pods.go:89] "kube-apiserver-no-preload-640155" [26b9547d-6b10-428a-a26f-47b007f06402] Running
	I1031 00:22:23.526729  248387 system_pods.go:89] "kube-controller-manager-no-preload-640155" [7b5ec3dd-11a2-4409-a271-e3f4149c49fe] Running
	I1031 00:22:23.526736  248387 system_pods.go:89] "kube-proxy-pkjsl" [3cc67cf4-4a59-42bf-a6ca-b2be409f5077] Running
	I1031 00:22:23.526753  248387 system_pods.go:89] "kube-scheduler-no-preload-640155" [f027c450-e0ac-4184-88c8-5de421603b25] Running
	I1031 00:22:23.526765  248387 system_pods.go:89] "metrics-server-57f55c9bc5-d2xg4" [b16ae9e6-6deb-485f-af5c-35cafada4a39] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:22:23.526776  248387 system_pods.go:89] "storage-provisioner" [acf2b5d0-1773-4ee6-882d-daff300f9d80] Running
	I1031 00:22:23.526789  248387 system_pods.go:126] duration metric: took 6.724214ms to wait for k8s-apps to be running ...
	I1031 00:22:23.526801  248387 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:22:23.526862  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:22:23.546006  248387 system_svc.go:56] duration metric: took 19.183151ms WaitForService to wait for kubelet.
	I1031 00:22:23.546038  248387 kubeadm.go:581] duration metric: took 4m39.57883274s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:22:23.546066  248387 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:22:23.550930  248387 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:22:23.550975  248387 node_conditions.go:123] node cpu capacity is 2
	I1031 00:22:23.551004  248387 node_conditions.go:105] duration metric: took 4.930974ms to run NodePressure ...
	I1031 00:22:23.551041  248387 start.go:228] waiting for startup goroutines ...
	I1031 00:22:23.551053  248387 start.go:233] waiting for cluster config update ...
	I1031 00:22:23.551064  248387 start.go:242] writing updated cluster config ...
	I1031 00:22:23.551346  248387 ssh_runner.go:195] Run: rm -f paused
	I1031 00:22:23.603812  248387 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1031 00:22:23.605925  248387 out.go:177] * Done! kubectl is now configured to use "no-preload-640155" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-31 00:12:28 UTC, ends at Tue 2023-10-31 00:26:27 UTC. --
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.207729872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698711987207713637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=043afc3d-0765-4c14-b035-fece67134f9c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.208313059Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bf6f9ff2-575a-4efb-97d8-d8903caaee08 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.208386081Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bf6f9ff2-575a-4efb-97d8-d8903caaee08 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.208578664Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3,PodSandboxId:4f4af887bf59e4b461388c62f300ac4242670c3f543fe7d6cf6448832bd5cd69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711214186674561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bce0572-aad8-4a9f-978f-9bd0ff62904a,},Annotations:map[string]string{io.kubernetes.container.hash: 7e579188,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff776cfd1370a2ecd2ebd919bf887815461feca2c3604f89b31255cfcadd84f3,PodSandboxId:4628a58fa00c16781c820f65bf281fbf0258cbcb3c35aa8c4c81aa24a3da3549,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698711192442163192,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac0523db-98c6-4583-8cc4-b0cd6bea7a8b,},Annotations:map[string]string{io.kubernetes.container.hash: ff541a11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26,PodSandboxId:9d31d8abd8f4effb317d559c8af3a457099773c57eb0672bd1f9f4cf2b37c89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698711190773895170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dqrs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d80a09-c397-4c78-a038-f07cad11de9c,},Annotations:map[string]string{io.kubernetes.container.hash: 1cb5b569,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c,PodSandboxId:4f4af887bf59e4b461388c62f300ac4242670c3f543fe7d6cf6448832bd5cd69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698711183249808394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 6bce0572-aad8-4a9f-978f-9bd0ff62904a,},Annotations:map[string]string{io.kubernetes.container.hash: 7e579188,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3,PodSandboxId:e20b5a6f9a35d6c484c86d92263ff97d86c5800b46bcedb4ccfb2f987db17264,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698711183124950068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-287dq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c3a3a9-ff
79-4cd8-ab26-a4ca2bec1fd9,},Annotations:map[string]string{io.kubernetes.container.hash: 404a6c81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6,PodSandboxId:daf5d500c92cb215c4ce18baa548c09e9bcdfc3b49eea4a6aa14beccf7a9c342,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698711177512324378,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae247f28a3a4d778946c27f65cc3d40,},Annotations:map[string
]string{io.kubernetes.container.hash: d3bd4104,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80,PodSandboxId:9c78a5ff74b936115a58fade7a3fab08bf6794745a9c21b4fee2f2244f6711f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698711177266496863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9474a5b90c0a45ef498a0096ce5ccfa0,},Annotations:map[string]string{io
.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70,PodSandboxId:0663bfc12e03afc5aa5f401fd69c6a6a2980c923810da197c9f2dda022dbe417,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698711177144498313,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9637d799fe724569676c9f38ab0bb286,},Annota
tions:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033,PodSandboxId:0823b451eb5f8e93b0532ad5273cf195d53f6369a9c151fa3f9cb8bdcc7e5ee1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698711177026766214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202667cac640795194af9959fa18541d,},Annotations:map[
string]string{io.kubernetes.container.hash: 28ddfe21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bf6f9ff2-575a-4efb-97d8-d8903caaee08 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.254485977Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1b768a29-ff55-430f-b7c9-11ed5e572d53 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.254576178Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1b768a29-ff55-430f-b7c9-11ed5e572d53 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.256489635Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=29650872-6c48-4d0a-87d6-8320ea97f71e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.256930711Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698711987256902703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=29650872-6c48-4d0a-87d6-8320ea97f71e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.257485330Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3f6a808c-780d-4f3d-b622-13532353b909 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.257564660Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3f6a808c-780d-4f3d-b622-13532353b909 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.257747945Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3,PodSandboxId:4f4af887bf59e4b461388c62f300ac4242670c3f543fe7d6cf6448832bd5cd69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711214186674561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bce0572-aad8-4a9f-978f-9bd0ff62904a,},Annotations:map[string]string{io.kubernetes.container.hash: 7e579188,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff776cfd1370a2ecd2ebd919bf887815461feca2c3604f89b31255cfcadd84f3,PodSandboxId:4628a58fa00c16781c820f65bf281fbf0258cbcb3c35aa8c4c81aa24a3da3549,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698711192442163192,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac0523db-98c6-4583-8cc4-b0cd6bea7a8b,},Annotations:map[string]string{io.kubernetes.container.hash: ff541a11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26,PodSandboxId:9d31d8abd8f4effb317d559c8af3a457099773c57eb0672bd1f9f4cf2b37c89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698711190773895170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dqrs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d80a09-c397-4c78-a038-f07cad11de9c,},Annotations:map[string]string{io.kubernetes.container.hash: 1cb5b569,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c,PodSandboxId:4f4af887bf59e4b461388c62f300ac4242670c3f543fe7d6cf6448832bd5cd69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698711183249808394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 6bce0572-aad8-4a9f-978f-9bd0ff62904a,},Annotations:map[string]string{io.kubernetes.container.hash: 7e579188,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3,PodSandboxId:e20b5a6f9a35d6c484c86d92263ff97d86c5800b46bcedb4ccfb2f987db17264,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698711183124950068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-287dq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c3a3a9-ff
79-4cd8-ab26-a4ca2bec1fd9,},Annotations:map[string]string{io.kubernetes.container.hash: 404a6c81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6,PodSandboxId:daf5d500c92cb215c4ce18baa548c09e9bcdfc3b49eea4a6aa14beccf7a9c342,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698711177512324378,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae247f28a3a4d778946c27f65cc3d40,},Annotations:map[string
]string{io.kubernetes.container.hash: d3bd4104,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80,PodSandboxId:9c78a5ff74b936115a58fade7a3fab08bf6794745a9c21b4fee2f2244f6711f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698711177266496863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9474a5b90c0a45ef498a0096ce5ccfa0,},Annotations:map[string]string{io
.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70,PodSandboxId:0663bfc12e03afc5aa5f401fd69c6a6a2980c923810da197c9f2dda022dbe417,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698711177144498313,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9637d799fe724569676c9f38ab0bb286,},Annota
tions:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033,PodSandboxId:0823b451eb5f8e93b0532ad5273cf195d53f6369a9c151fa3f9cb8bdcc7e5ee1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698711177026766214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202667cac640795194af9959fa18541d,},Annotations:map[
string]string{io.kubernetes.container.hash: 28ddfe21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3f6a808c-780d-4f3d-b622-13532353b909 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.297139917Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3202ef34-bcaa-4b6d-9fb6-8463a28d10b3 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.297228200Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3202ef34-bcaa-4b6d-9fb6-8463a28d10b3 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.298470573Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6cdbb284-241d-4fff-ac39-9ab7cb5000ad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.298979182Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698711987298962466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6cdbb284-241d-4fff-ac39-9ab7cb5000ad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.299951098Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b3b137e6-a617-4da4-a6c5-dd26893edde4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.300090002Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b3b137e6-a617-4da4-a6c5-dd26893edde4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.300312281Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3,PodSandboxId:4f4af887bf59e4b461388c62f300ac4242670c3f543fe7d6cf6448832bd5cd69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711214186674561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bce0572-aad8-4a9f-978f-9bd0ff62904a,},Annotations:map[string]string{io.kubernetes.container.hash: 7e579188,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff776cfd1370a2ecd2ebd919bf887815461feca2c3604f89b31255cfcadd84f3,PodSandboxId:4628a58fa00c16781c820f65bf281fbf0258cbcb3c35aa8c4c81aa24a3da3549,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698711192442163192,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac0523db-98c6-4583-8cc4-b0cd6bea7a8b,},Annotations:map[string]string{io.kubernetes.container.hash: ff541a11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26,PodSandboxId:9d31d8abd8f4effb317d559c8af3a457099773c57eb0672bd1f9f4cf2b37c89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698711190773895170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dqrs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d80a09-c397-4c78-a038-f07cad11de9c,},Annotations:map[string]string{io.kubernetes.container.hash: 1cb5b569,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c,PodSandboxId:4f4af887bf59e4b461388c62f300ac4242670c3f543fe7d6cf6448832bd5cd69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698711183249808394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 6bce0572-aad8-4a9f-978f-9bd0ff62904a,},Annotations:map[string]string{io.kubernetes.container.hash: 7e579188,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3,PodSandboxId:e20b5a6f9a35d6c484c86d92263ff97d86c5800b46bcedb4ccfb2f987db17264,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698711183124950068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-287dq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c3a3a9-ff
79-4cd8-ab26-a4ca2bec1fd9,},Annotations:map[string]string{io.kubernetes.container.hash: 404a6c81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6,PodSandboxId:daf5d500c92cb215c4ce18baa548c09e9bcdfc3b49eea4a6aa14beccf7a9c342,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698711177512324378,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae247f28a3a4d778946c27f65cc3d40,},Annotations:map[string
]string{io.kubernetes.container.hash: d3bd4104,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80,PodSandboxId:9c78a5ff74b936115a58fade7a3fab08bf6794745a9c21b4fee2f2244f6711f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698711177266496863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9474a5b90c0a45ef498a0096ce5ccfa0,},Annotations:map[string]string{io
.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70,PodSandboxId:0663bfc12e03afc5aa5f401fd69c6a6a2980c923810da197c9f2dda022dbe417,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698711177144498313,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9637d799fe724569676c9f38ab0bb286,},Annota
tions:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033,PodSandboxId:0823b451eb5f8e93b0532ad5273cf195d53f6369a9c151fa3f9cb8bdcc7e5ee1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698711177026766214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202667cac640795194af9959fa18541d,},Annotations:map[
string]string{io.kubernetes.container.hash: 28ddfe21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b3b137e6-a617-4da4-a6c5-dd26893edde4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.346110114Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1e601ffd-717b-4db7-ae4b-e0cc3676aeb5 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.346200891Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1e601ffd-717b-4db7-ae4b-e0cc3676aeb5 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.348904923Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d3628c74-60cd-4c69-a097-4fcde36d72fb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.349493259Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698711987349453293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d3628c74-60cd-4c69-a097-4fcde36d72fb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.350231328Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9fcf63f6-c752-4183-9f6c-cf4eb6d6af5c name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.350303052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9fcf63f6-c752-4183-9f6c-cf4eb6d6af5c name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:26:27 embed-certs-078843 crio[711]: time="2023-10-31 00:26:27.350497437Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3,PodSandboxId:4f4af887bf59e4b461388c62f300ac4242670c3f543fe7d6cf6448832bd5cd69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711214186674561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bce0572-aad8-4a9f-978f-9bd0ff62904a,},Annotations:map[string]string{io.kubernetes.container.hash: 7e579188,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff776cfd1370a2ecd2ebd919bf887815461feca2c3604f89b31255cfcadd84f3,PodSandboxId:4628a58fa00c16781c820f65bf281fbf0258cbcb3c35aa8c4c81aa24a3da3549,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698711192442163192,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac0523db-98c6-4583-8cc4-b0cd6bea7a8b,},Annotations:map[string]string{io.kubernetes.container.hash: ff541a11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26,PodSandboxId:9d31d8abd8f4effb317d559c8af3a457099773c57eb0672bd1f9f4cf2b37c89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698711190773895170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dqrs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d80a09-c397-4c78-a038-f07cad11de9c,},Annotations:map[string]string{io.kubernetes.container.hash: 1cb5b569,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c,PodSandboxId:4f4af887bf59e4b461388c62f300ac4242670c3f543fe7d6cf6448832bd5cd69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698711183249808394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 6bce0572-aad8-4a9f-978f-9bd0ff62904a,},Annotations:map[string]string{io.kubernetes.container.hash: 7e579188,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3,PodSandboxId:e20b5a6f9a35d6c484c86d92263ff97d86c5800b46bcedb4ccfb2f987db17264,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698711183124950068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-287dq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c3a3a9-ff
79-4cd8-ab26-a4ca2bec1fd9,},Annotations:map[string]string{io.kubernetes.container.hash: 404a6c81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6,PodSandboxId:daf5d500c92cb215c4ce18baa548c09e9bcdfc3b49eea4a6aa14beccf7a9c342,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698711177512324378,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae247f28a3a4d778946c27f65cc3d40,},Annotations:map[string
]string{io.kubernetes.container.hash: d3bd4104,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80,PodSandboxId:9c78a5ff74b936115a58fade7a3fab08bf6794745a9c21b4fee2f2244f6711f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698711177266496863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9474a5b90c0a45ef498a0096ce5ccfa0,},Annotations:map[string]string{io
.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70,PodSandboxId:0663bfc12e03afc5aa5f401fd69c6a6a2980c923810da197c9f2dda022dbe417,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698711177144498313,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9637d799fe724569676c9f38ab0bb286,},Annota
tions:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033,PodSandboxId:0823b451eb5f8e93b0532ad5273cf195d53f6369a9c151fa3f9cb8bdcc7e5ee1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698711177026766214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202667cac640795194af9959fa18541d,},Annotations:map[
string]string{io.kubernetes.container.hash: 28ddfe21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9fcf63f6-c752-4183-9f6c-cf4eb6d6af5c name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	86e0b59eda801       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   4f4af887bf59e       storage-provisioner
	ff776cfd1370a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   4628a58fa00c1       busybox
	8e049ebc03e12       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   9d31d8abd8f4e       coredns-5dd5756b68-dqrs4
	622298cd36157       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   4f4af887bf59e       storage-provisioner
	f52fe11ae8422       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      13 minutes ago      Running             kube-proxy                1                   e20b5a6f9a35d       kube-proxy-287dq
	35bf5adca8564       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   daf5d500c92cb       etcd-embed-certs-078843
	ee4cc3844ed36       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      13 minutes ago      Running             kube-scheduler            1                   9c78a5ff74b93       kube-scheduler-embed-certs-078843
	4622dc85f3882       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      13 minutes ago      Running             kube-controller-manager   1                   0663bfc12e03a       kube-controller-manager-embed-certs-078843
	bb31ab0db497f       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      13 minutes ago      Running             kube-apiserver            1                   0823b451eb5f8       kube-apiserver-embed-certs-078843
	
	* 
	* ==> coredns [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54179 - 36798 "HINFO IN 2334349160939681849.7017679136187254627. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011791188s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-078843
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-078843
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4
	                    minikube.k8s.io/name=embed-certs-078843
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_31T00_04_59_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 00:04:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-078843
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 Oct 2023 00:26:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 00:23:44 +0000   Tue, 31 Oct 2023 00:04:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 00:23:44 +0000   Tue, 31 Oct 2023 00:04:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 00:23:44 +0000   Tue, 31 Oct 2023 00:04:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 00:23:44 +0000   Tue, 31 Oct 2023 00:13:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.2
	  Hostname:    embed-certs-078843
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 7431126be6a247cb89e27d326eef3e05
	  System UUID:                7431126b-e6a2-47cb-89e2-7d326eef3e05
	  Boot ID:                    7caa986b-82b9-47f7-ae69-a57fee90e2a7
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-5dd5756b68-dqrs4                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-078843                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-078843             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-078843    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-287dq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-078843             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-pm6qx               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-078843 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-078843 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-078843 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-078843 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-078843 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-078843 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                21m                kubelet          Node embed-certs-078843 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-078843 event: Registered Node embed-certs-078843 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-078843 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-078843 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-078843 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-078843 event: Registered Node embed-certs-078843 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct31 00:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068883] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.425685] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.467577] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.159282] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.503542] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.638603] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.112994] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.161515] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.125361] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.230014] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +17.569942] systemd-fstab-generator[912]: Ignoring "noauto" for root device
	[Oct31 00:13] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6] <==
	* {"level":"warn","ts":"2023-10-31T00:13:09.631824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.651127ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-078843\" ","response":"range_response_count:1 size:5683"}
	{"level":"info","ts":"2023-10-31T00:13:09.634568Z","caller":"traceutil/trace.go:171","msg":"trace[1099389080] range","detail":"{range_begin:/registry/minions/embed-certs-078843; range_end:; response_count:1; response_revision:590; }","duration":"215.394485ms","start":"2023-10-31T00:13:09.419158Z","end":"2023-10-31T00:13:09.634553Z","steps":["trace[1099389080] 'agreement among raft nodes before linearized reading'  (duration: 212.572068ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-31T00:13:09.634477Z","caller":"traceutil/trace.go:171","msg":"trace[911985954] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"502.539858ms","start":"2023-10-31T00:13:09.131922Z","end":"2023-10-31T00:13:09.634462Z","steps":["trace[911985954] 'process raft request'  (duration: 380.797773ms)","trace[911985954] 'compare'  (duration: 118.190533ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-31T00:13:09.635326Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-31T00:13:09.131908Z","time spent":"503.293747ms","remote":"127.0.0.1:38708","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6515,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-078843\" mod_revision:589 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-078843\" value_size:6447 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-078843\" > >"}
	{"level":"info","ts":"2023-10-31T00:13:09.772143Z","caller":"traceutil/trace.go:171","msg":"trace[1701541557] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"113.364677ms","start":"2023-10-31T00:13:09.65876Z","end":"2023-10-31T00:13:09.772124Z","steps":["trace[1701541557] 'process raft request'  (duration: 111.728709ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-31T00:13:32.164644Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.704077ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-pm6qx\" ","response":"range_response_count:1 size:4025"}
	{"level":"info","ts":"2023-10-31T00:13:32.164969Z","caller":"traceutil/trace.go:171","msg":"trace[1422042551] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-pm6qx; range_end:; response_count:1; response_revision:629; }","duration":"127.047251ms","start":"2023-10-31T00:13:32.037909Z","end":"2023-10-31T00:13:32.164956Z","steps":["trace[1422042551] 'range keys from in-memory index tree'  (duration: 126.522595ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-31T00:13:32.567578Z","caller":"traceutil/trace.go:171","msg":"trace[1241615418] transaction","detail":"{read_only:false; response_revision:630; number_of_response:1; }","duration":"384.63353ms","start":"2023-10-31T00:13:32.182924Z","end":"2023-10-31T00:13:32.567558Z","steps":["trace[1241615418] 'process raft request'  (duration: 384.493255ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-31T00:13:32.567736Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-31T00:13:32.182889Z","time spent":"384.778452ms","remote":"127.0.0.1:38726","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":560,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-078843\" mod_revision:622 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-078843\" value_size:501 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-078843\" > >"}
	{"level":"warn","ts":"2023-10-31T00:13:33.247842Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"413.416821ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2569456982500035559 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/embed-certs-078843\" mod_revision:608 > success:<request_put:<key:\"/registry/minions/embed-certs-078843\" value_size:5699 >> failure:<request_range:<key:\"/registry/minions/embed-certs-078843\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-10-31T00:13:33.247953Z","caller":"traceutil/trace.go:171","msg":"trace[92540663] linearizableReadLoop","detail":"{readStateIndex:679; appliedIndex:678; }","duration":"708.421372ms","start":"2023-10-31T00:13:32.539518Z","end":"2023-10-31T00:13:33.24794Z","steps":["trace[92540663] 'read index received'  (duration: 28.321217ms)","trace[92540663] 'applied index is now lower than readState.Index'  (duration: 680.09847ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-31T00:13:33.248212Z","caller":"traceutil/trace.go:171","msg":"trace[708673114] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"841.301268ms","start":"2023-10-31T00:13:32.406899Z","end":"2023-10-31T00:13:33.2482Z","steps":["trace[708673114] 'process raft request'  (duration: 427.367032ms)","trace[708673114] 'compare'  (duration: 413.271653ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-31T00:13:33.248273Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-31T00:13:32.406884Z","time spent":"841.350896ms","remote":"127.0.0.1:38704","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5743,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/embed-certs-078843\" mod_revision:608 > success:<request_put:<key:\"/registry/minions/embed-certs-078843\" value_size:5699 >> failure:<request_range:<key:\"/registry/minions/embed-certs-078843\" > >"}
	{"level":"warn","ts":"2023-10-31T00:13:33.248415Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"708.906728ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-pm6qx\" ","response":"range_response_count:1 size:4025"}
	{"level":"info","ts":"2023-10-31T00:13:33.248436Z","caller":"traceutil/trace.go:171","msg":"trace[1652761747] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-pm6qx; range_end:; response_count:1; response_revision:631; }","duration":"708.936966ms","start":"2023-10-31T00:13:32.539493Z","end":"2023-10-31T00:13:33.24843Z","steps":["trace[1652761747] 'agreement among raft nodes before linearized reading'  (duration: 708.883255ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-31T00:13:33.248456Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-31T00:13:32.53948Z","time spent":"708.971382ms","remote":"127.0.0.1:38708","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4048,"request content":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-pm6qx\" "}
	{"level":"warn","ts":"2023-10-31T00:13:33.248575Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"382.760123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.50.2\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2023-10-31T00:13:33.248599Z","caller":"traceutil/trace.go:171","msg":"trace[1841375359] range","detail":"{range_begin:/registry/masterleases/192.168.50.2; range_end:; response_count:1; response_revision:631; }","duration":"382.783811ms","start":"2023-10-31T00:13:32.865809Z","end":"2023-10-31T00:13:33.248593Z","steps":["trace[1841375359] 'agreement among raft nodes before linearized reading'  (duration: 382.739704ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-31T00:13:33.248617Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-31T00:13:32.865797Z","time spent":"382.815915ms","remote":"127.0.0.1:38666","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":154,"request content":"key:\"/registry/masterleases/192.168.50.2\" "}
	{"level":"info","ts":"2023-10-31T00:13:33.39536Z","caller":"traceutil/trace.go:171","msg":"trace[1139944266] linearizableReadLoop","detail":"{readStateIndex:680; appliedIndex:679; }","duration":"138.876465ms","start":"2023-10-31T00:13:33.256463Z","end":"2023-10-31T00:13:33.395339Z","steps":["trace[1139944266] 'read index received'  (duration: 126.313353ms)","trace[1139944266] 'applied index is now lower than readState.Index'  (duration: 12.561612ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-31T00:13:33.395494Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.022641ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-078843\" ","response":"range_response_count:1 size:5757"}
	{"level":"info","ts":"2023-10-31T00:13:33.395545Z","caller":"traceutil/trace.go:171","msg":"trace[651639044] range","detail":"{range_begin:/registry/minions/embed-certs-078843; range_end:; response_count:1; response_revision:631; }","duration":"139.091664ms","start":"2023-10-31T00:13:33.256444Z","end":"2023-10-31T00:13:33.395536Z","steps":["trace[651639044] 'agreement among raft nodes before linearized reading'  (duration: 138.990335ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-31T00:22:59.764972Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":864}
	{"level":"info","ts":"2023-10-31T00:22:59.768243Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":864,"took":"2.56006ms","hash":192751056}
	{"level":"info","ts":"2023-10-31T00:22:59.768338Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":192751056,"revision":864,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  00:26:27 up 14 min,  0 users,  load average: 0.07, 0.11, 0.09
	Linux embed-certs-078843 5.10.57 #1 SMP Mon Oct 30 21:42:24 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033] <==
	* I1031 00:23:01.570242       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1031 00:23:02.569844       1 handler_proxy.go:93] no RequestInfo found in the context
	W1031 00:23:02.570076       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:23:02.570179       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:23:02.570188       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1031 00:23:02.570078       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1031 00:23:02.571604       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:24:01.414496       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1031 00:24:02.571174       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:24:02.571298       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:24:02.571308       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1031 00:24:02.572260       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:24:02.572371       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1031 00:24:02.572403       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:25:01.413839       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1031 00:26:01.413627       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1031 00:26:02.571928       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:26:02.572176       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:26:02.572223       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1031 00:26:02.573068       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:26:02.573172       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1031 00:26:02.573231       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70] <==
	* I1031 00:20:45.335795       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:21:14.831844       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:21:15.345609       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:21:44.836924       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:21:45.355062       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:22:14.845075       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:22:15.365490       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:22:44.851296       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:22:45.374612       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:23:14.858680       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:23:15.385782       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:23:44.865362       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:23:45.394413       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1031 00:24:02.980295       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="382.13µs"
	E1031 00:24:14.871421       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:24:15.404503       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1031 00:24:17.982609       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="186.366µs"
	E1031 00:24:44.878920       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:24:45.412532       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:25:14.885063       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:25:15.422710       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:25:44.890747       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:25:45.431558       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:26:14.896923       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:26:15.440509       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3] <==
	* I1031 00:13:03.997318       1 server_others.go:69] "Using iptables proxy"
	I1031 00:13:04.039269       1 node.go:141] Successfully retrieved node IP: 192.168.50.2
	I1031 00:13:04.287163       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1031 00:13:04.287227       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1031 00:13:04.315503       1 server_others.go:152] "Using iptables Proxier"
	I1031 00:13:04.317867       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1031 00:13:04.319107       1 server.go:846] "Version info" version="v1.28.3"
	I1031 00:13:04.319269       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 00:13:04.336942       1 config.go:315] "Starting node config controller"
	I1031 00:13:04.337205       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1031 00:13:04.338653       1 config.go:188] "Starting service config controller"
	I1031 00:13:04.338706       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1031 00:13:04.338748       1 config.go:97] "Starting endpoint slice config controller"
	I1031 00:13:04.338771       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1031 00:13:04.438098       1 shared_informer.go:318] Caches are synced for node config
	I1031 00:13:04.438935       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1031 00:13:04.439092       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80] <==
	* I1031 00:12:59.652169       1 serving.go:348] Generated self-signed cert in-memory
	W1031 00:13:01.525481       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1031 00:13:01.525601       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1031 00:13:01.525688       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1031 00:13:01.525695       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1031 00:13:01.583193       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1031 00:13:01.583240       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 00:13:01.584995       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1031 00:13:01.587752       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1031 00:13:01.587814       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1031 00:13:01.587830       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1031 00:13:01.688149       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-31 00:12:28 UTC, ends at Tue 2023-10-31 00:26:27 UTC. --
	Oct 31 00:23:48 embed-certs-078843 kubelet[918]: E1031 00:23:48.973362     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:23:55 embed-certs-078843 kubelet[918]: E1031 00:23:55.972086     918 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 00:23:55 embed-certs-078843 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 00:23:55 embed-certs-078843 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 00:23:55 embed-certs-078843 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 00:24:02 embed-certs-078843 kubelet[918]: E1031 00:24:02.960602     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:24:17 embed-certs-078843 kubelet[918]: E1031 00:24:17.961401     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:24:29 embed-certs-078843 kubelet[918]: E1031 00:24:29.960477     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:24:42 embed-certs-078843 kubelet[918]: E1031 00:24:42.960108     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:24:54 embed-certs-078843 kubelet[918]: E1031 00:24:54.960950     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:24:55 embed-certs-078843 kubelet[918]: E1031 00:24:55.972932     918 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 00:24:55 embed-certs-078843 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 00:24:55 embed-certs-078843 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 00:24:55 embed-certs-078843 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 00:25:07 embed-certs-078843 kubelet[918]: E1031 00:25:07.960664     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:25:21 embed-certs-078843 kubelet[918]: E1031 00:25:21.960886     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:25:34 embed-certs-078843 kubelet[918]: E1031 00:25:34.959896     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:25:45 embed-certs-078843 kubelet[918]: E1031 00:25:45.960982     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:25:55 embed-certs-078843 kubelet[918]: E1031 00:25:55.977582     918 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 00:25:55 embed-certs-078843 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 00:25:55 embed-certs-078843 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 00:25:55 embed-certs-078843 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 00:25:56 embed-certs-078843 kubelet[918]: E1031 00:25:56.960083     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:26:11 embed-certs-078843 kubelet[918]: E1031 00:26:11.960859     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:26:25 embed-certs-078843 kubelet[918]: E1031 00:26:25.963285     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	
	* 
	* ==> storage-provisioner [622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c] <==
	* I1031 00:13:03.691437       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1031 00:13:33.750927       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3] <==
	* I1031 00:13:34.326323       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1031 00:13:34.352712       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1031 00:13:34.352922       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1031 00:13:51.755808       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1031 00:13:51.756196       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-078843_f48dbb48-29f3-4d64-a9e0-34066179c473!
	I1031 00:13:51.759230       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9a3d186e-da90-4734-84c0-9ae37e0e9998", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-078843_f48dbb48-29f3-4d64-a9e0-34066179c473 became leader
	I1031 00:13:51.857296       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-078843_f48dbb48-29f3-4d64-a9e0-34066179c473!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-078843 -n embed-certs-078843
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-078843 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-pm6qx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-078843 describe pod metrics-server-57f55c9bc5-pm6qx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-078843 describe pod metrics-server-57f55c9bc5-pm6qx: exit status 1 (81.171197ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-pm6qx" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-078843 describe pod metrics-server-57f55c9bc5-pm6qx: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.45s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1031 00:19:14.583659  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1031 00:19:30.631758  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-892233 -n default-k8s-diff-port-892233
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-10-31 00:27:26.3200719 +0000 UTC m=+5148.570088822
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-892233 -n default-k8s-diff-port-892233
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-892233 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-892233 logs -n 25: (1.679756166s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p cert-options-344463                                 | cert-options-344463          | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:02 UTC | 31 Oct 23 00:02 UTC |
	| start   | -p no-preload-640155                                   | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:02 UTC | 31 Oct 23 00:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| start   | -p stopped-upgrade-237143                              | stopped-upgrade-237143       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p stopped-upgrade-237143                              | stopped-upgrade-237143       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC | 31 Oct 23 00:04 UTC |
	| start   | -p embed-certs-078843                                  | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC | 31 Oct 23 00:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-225140        | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC | 31 Oct 23 00:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-225140                              | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-640155             | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:05 UTC | 31 Oct 23 00:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-640155                                   | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p cert-expiration-663908                              | cert-expiration-663908       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:05 UTC | 31 Oct 23 00:06 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-078843            | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-078843                                  | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-663908                              | cert-expiration-663908       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-221554 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:06 UTC |
	|         | disable-driver-mounts-221554                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:07 UTC |
	|         | default-k8s-diff-port-892233                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-225140             | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-225140                              | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC | 31 Oct 23 00:20 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-892233  | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC | 31 Oct 23 00:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC |                     |
	|         | default-k8s-diff-port-892233                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-640155                  | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-640155                                   | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC | 31 Oct 23 00:22 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-078843                 | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-078843                                  | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:08 UTC | 31 Oct 23 00:17 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-892233       | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:09 UTC | 31 Oct 23 00:18 UTC |
	|         | default-k8s-diff-port-892233                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 00:09:59
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 00:09:59.171110  249055 out.go:296] Setting OutFile to fd 1 ...
	I1031 00:09:59.171372  249055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:09:59.171383  249055 out.go:309] Setting ErrFile to fd 2...
	I1031 00:09:59.171387  249055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:09:59.171591  249055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1031 00:09:59.172151  249055 out.go:303] Setting JSON to false
	I1031 00:09:59.173091  249055 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28351,"bootTime":1698682648,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 00:09:59.173154  249055 start.go:138] virtualization: kvm guest
	I1031 00:09:59.175712  249055 out.go:177] * [default-k8s-diff-port-892233] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 00:09:59.177218  249055 notify.go:220] Checking for updates...
	I1031 00:09:59.177238  249055 out.go:177]   - MINIKUBE_LOCATION=17527
	I1031 00:09:59.178590  249055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 00:09:59.179936  249055 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:09:59.181243  249055 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1031 00:09:59.182619  249055 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 00:09:59.184021  249055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 00:09:59.185755  249055 config.go:182] Loaded profile config "default-k8s-diff-port-892233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:09:59.186187  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:09:59.186242  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:09:59.200537  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I1031 00:09:59.201002  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:09:59.201576  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:09:59.201596  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:09:59.201949  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:09:59.202159  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:09:59.202362  249055 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 00:09:59.202635  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:09:59.202680  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:09:59.216197  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35869
	I1031 00:09:59.216575  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:09:59.216998  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:09:59.217027  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:09:59.217349  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:09:59.217537  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:09:59.250565  249055 out.go:177] * Using the kvm2 driver based on existing profile
	I1031 00:09:59.251974  249055 start.go:298] selected driver: kvm2
	I1031 00:09:59.251988  249055 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-892233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-892233 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.2 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:09:59.252123  249055 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 00:09:59.253132  249055 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:09:59.253220  249055 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17527-208817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 00:09:59.266948  249055 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 00:09:59.267297  249055 start_flags.go:934] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 00:09:59.267362  249055 cni.go:84] Creating CNI manager for ""
	I1031 00:09:59.267383  249055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:09:59.267401  249055 start_flags.go:323] config:
	{Name:default-k8s-diff-port-892233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-892233 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.2 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:09:59.267557  249055 iso.go:125] acquiring lock: {Name:mk17c26869b21ec4c3726ac5b4b2fb393d92c043 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:09:59.269225  249055 out.go:177] * Starting control plane node default-k8s-diff-port-892233 in cluster default-k8s-diff-port-892233
	I1031 00:09:57.481224  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:00.553221  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:09:59.270407  249055 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:09:59.270449  249055 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1031 00:09:59.270460  249055 cache.go:56] Caching tarball of preloaded images
	I1031 00:09:59.270553  249055 preload.go:174] Found /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1031 00:09:59.270569  249055 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1031 00:09:59.270702  249055 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/config.json ...
	I1031 00:09:59.270937  249055 start.go:365] acquiring machines lock for default-k8s-diff-port-892233: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 00:10:06.633217  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:09.705265  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:15.785240  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:18.857227  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:24.937215  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:28.009292  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:34.089205  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:37.161208  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:43.241288  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:46.313160  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:52.393273  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:55.465205  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:01.545192  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:04.617227  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:10.697233  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:13.769258  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:19.849250  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:22.921270  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:29.001178  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:32.073257  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:38.153271  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:41.225244  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:47.305235  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:50.377235  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:53.381665  248387 start.go:369] acquired machines lock for "no-preload-640155" in 4m7.945210729s
	I1031 00:11:53.381722  248387 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:11:53.381734  248387 fix.go:54] fixHost starting: 
	I1031 00:11:53.382372  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:11:53.382418  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:11:53.397155  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43017
	I1031 00:11:53.397704  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:11:53.398181  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:11:53.398206  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:11:53.398561  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:11:53.398761  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:11:53.398909  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:11:53.400611  248387 fix.go:102] recreateIfNeeded on no-preload-640155: state=Stopped err=<nil>
	I1031 00:11:53.400634  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	W1031 00:11:53.400782  248387 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:11:53.402394  248387 out.go:177] * Restarting existing kvm2 VM for "no-preload-640155" ...
	I1031 00:11:53.403767  248387 main.go:141] libmachine: (no-preload-640155) Calling .Start
	I1031 00:11:53.403944  248387 main.go:141] libmachine: (no-preload-640155) Ensuring networks are active...
	I1031 00:11:53.404678  248387 main.go:141] libmachine: (no-preload-640155) Ensuring network default is active
	I1031 00:11:53.405127  248387 main.go:141] libmachine: (no-preload-640155) Ensuring network mk-no-preload-640155 is active
	I1031 00:11:53.405642  248387 main.go:141] libmachine: (no-preload-640155) Getting domain xml...
	I1031 00:11:53.406300  248387 main.go:141] libmachine: (no-preload-640155) Creating domain...
	I1031 00:11:54.646418  248387 main.go:141] libmachine: (no-preload-640155) Waiting to get IP...
	I1031 00:11:54.647560  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:54.647956  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:54.648034  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:54.647947  249366 retry.go:31] will retry after 237.521879ms: waiting for machine to come up
	I1031 00:11:54.887446  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:54.887861  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:54.887895  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:54.887804  249366 retry.go:31] will retry after 320.996838ms: waiting for machine to come up
	I1031 00:11:53.379251  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:11:53.379302  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:11:53.381458  248084 machine.go:91] provisioned docker machine in 4m37.397131013s
	I1031 00:11:53.381513  248084 fix.go:56] fixHost completed within 4m37.420319931s
	I1031 00:11:53.381528  248084 start.go:83] releasing machines lock for "old-k8s-version-225140", held for 4m37.420354195s
	W1031 00:11:53.381569  248084 start.go:691] error starting host: provision: host is not running
	W1031 00:11:53.381676  248084 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1031 00:11:53.381687  248084 start.go:706] Will try again in 5 seconds ...
	I1031 00:11:55.210309  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:55.210784  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:55.210818  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:55.210728  249366 retry.go:31] will retry after 412.198071ms: waiting for machine to come up
	I1031 00:11:55.624299  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:55.624689  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:55.624721  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:55.624647  249366 retry.go:31] will retry after 596.339141ms: waiting for machine to come up
	I1031 00:11:56.222381  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:56.222918  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:56.222952  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:56.222864  249366 retry.go:31] will retry after 640.775314ms: waiting for machine to come up
	I1031 00:11:56.865881  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:56.866355  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:56.866394  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:56.866321  249366 retry.go:31] will retry after 797.697217ms: waiting for machine to come up
	I1031 00:11:57.665413  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:57.665930  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:57.665971  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:57.665871  249366 retry.go:31] will retry after 808.934364ms: waiting for machine to come up
	I1031 00:11:58.476161  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:58.476620  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:58.476651  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:58.476582  249366 retry.go:31] will retry after 1.198576442s: waiting for machine to come up
	I1031 00:11:59.676957  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:59.677540  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:59.677575  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:59.677462  249366 retry.go:31] will retry after 1.122967081s: waiting for machine to come up
	I1031 00:11:58.383586  248084 start.go:365] acquiring machines lock for old-k8s-version-225140: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 00:12:00.801790  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:00.802278  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:00.802313  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:00.802216  249366 retry.go:31] will retry after 2.182263229s: waiting for machine to come up
	I1031 00:12:02.987870  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:02.988307  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:02.988339  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:02.988235  249366 retry.go:31] will retry after 2.73312352s: waiting for machine to come up
	I1031 00:12:05.723196  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:05.723664  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:05.723695  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:05.723595  249366 retry.go:31] will retry after 2.33306923s: waiting for machine to come up
	I1031 00:12:08.060086  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:08.060364  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:08.060394  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:08.060328  249366 retry.go:31] will retry after 2.770780436s: waiting for machine to come up
	I1031 00:12:10.834601  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:10.834995  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:10.835020  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:10.834939  249366 retry.go:31] will retry after 4.389090657s: waiting for machine to come up
	I1031 00:12:16.389786  248718 start.go:369] acquired machines lock for "embed-certs-078843" in 3m38.778041195s
	I1031 00:12:16.389855  248718 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:12:16.389864  248718 fix.go:54] fixHost starting: 
	I1031 00:12:16.390317  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:12:16.390362  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:12:16.407875  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36031
	I1031 00:12:16.408273  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:12:16.408842  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:12:16.408870  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:12:16.409226  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:12:16.409404  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:16.409574  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:12:16.410975  248718 fix.go:102] recreateIfNeeded on embed-certs-078843: state=Stopped err=<nil>
	I1031 00:12:16.411013  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	W1031 00:12:16.411196  248718 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:12:16.413529  248718 out.go:177] * Restarting existing kvm2 VM for "embed-certs-078843" ...
	I1031 00:12:16.414858  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Start
	I1031 00:12:16.415041  248718 main.go:141] libmachine: (embed-certs-078843) Ensuring networks are active...
	I1031 00:12:16.415738  248718 main.go:141] libmachine: (embed-certs-078843) Ensuring network default is active
	I1031 00:12:16.416116  248718 main.go:141] libmachine: (embed-certs-078843) Ensuring network mk-embed-certs-078843 is active
	I1031 00:12:16.416450  248718 main.go:141] libmachine: (embed-certs-078843) Getting domain xml...
	I1031 00:12:16.417190  248718 main.go:141] libmachine: (embed-certs-078843) Creating domain...
	I1031 00:12:15.226912  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.227453  248387 main.go:141] libmachine: (no-preload-640155) Found IP for machine: 192.168.61.168
	I1031 00:12:15.227473  248387 main.go:141] libmachine: (no-preload-640155) Reserving static IP address...
	I1031 00:12:15.227513  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has current primary IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.227861  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "no-preload-640155", mac: "52:54:00:bd:a4:c2", ip: "192.168.61.168"} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.227890  248387 main.go:141] libmachine: (no-preload-640155) DBG | skip adding static IP to network mk-no-preload-640155 - found existing host DHCP lease matching {name: "no-preload-640155", mac: "52:54:00:bd:a4:c2", ip: "192.168.61.168"}
	I1031 00:12:15.227900  248387 main.go:141] libmachine: (no-preload-640155) Reserved static IP address: 192.168.61.168
	I1031 00:12:15.227919  248387 main.go:141] libmachine: (no-preload-640155) Waiting for SSH to be available...
	I1031 00:12:15.227938  248387 main.go:141] libmachine: (no-preload-640155) DBG | Getting to WaitForSSH function...
	I1031 00:12:15.230076  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.230450  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.230556  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.230578  248387 main.go:141] libmachine: (no-preload-640155) DBG | Using SSH client type: external
	I1031 00:12:15.230601  248387 main.go:141] libmachine: (no-preload-640155) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa (-rw-------)
	I1031 00:12:15.230646  248387 main.go:141] libmachine: (no-preload-640155) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.168 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:12:15.230666  248387 main.go:141] libmachine: (no-preload-640155) DBG | About to run SSH command:
	I1031 00:12:15.230678  248387 main.go:141] libmachine: (no-preload-640155) DBG | exit 0
	I1031 00:12:15.316515  248387 main.go:141] libmachine: (no-preload-640155) DBG | SSH cmd err, output: <nil>: 
	I1031 00:12:15.316855  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetConfigRaw
	I1031 00:12:15.317658  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetIP
	I1031 00:12:15.320306  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.320647  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.320679  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.321008  248387 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/config.json ...
	I1031 00:12:15.321252  248387 machine.go:88] provisioning docker machine ...
	I1031 00:12:15.321275  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:15.321492  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetMachineName
	I1031 00:12:15.321669  248387 buildroot.go:166] provisioning hostname "no-preload-640155"
	I1031 00:12:15.321691  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetMachineName
	I1031 00:12:15.321858  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.324151  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.324480  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.324518  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.324657  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:15.324849  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.325057  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.325237  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:15.325416  248387 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:15.325795  248387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.168 22 <nil> <nil>}
	I1031 00:12:15.325815  248387 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-640155 && echo "no-preload-640155" | sudo tee /etc/hostname
	I1031 00:12:15.450048  248387 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-640155
	
	I1031 00:12:15.450079  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.452951  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.453298  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.453344  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.453430  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:15.453657  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.453800  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.453899  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:15.454055  248387 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:15.454540  248387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.168 22 <nil> <nil>}
	I1031 00:12:15.454569  248387 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-640155' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-640155/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-640155' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:12:15.574041  248387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:12:15.574072  248387 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:12:15.574104  248387 buildroot.go:174] setting up certificates
	I1031 00:12:15.574116  248387 provision.go:83] configureAuth start
	I1031 00:12:15.574125  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetMachineName
	I1031 00:12:15.574451  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetIP
	I1031 00:12:15.577558  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.578020  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.578059  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.578197  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.580453  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.580832  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.580876  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.581078  248387 provision.go:138] copyHostCerts
	I1031 00:12:15.581171  248387 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:12:15.581184  248387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:12:15.581256  248387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:12:15.581407  248387 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:12:15.581420  248387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:12:15.581453  248387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:12:15.581522  248387 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:12:15.581530  248387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:12:15.581560  248387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:12:15.581611  248387 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.no-preload-640155 san=[192.168.61.168 192.168.61.168 localhost 127.0.0.1 minikube no-preload-640155]
	I1031 00:12:15.693832  248387 provision.go:172] copyRemoteCerts
	I1031 00:12:15.693906  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:12:15.693934  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.696811  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.697210  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.697258  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.697471  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:15.697683  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.697870  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:15.698054  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:12:15.781207  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:12:15.803665  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1031 00:12:15.826369  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 00:12:15.849259  248387 provision.go:86] duration metric: configureAuth took 275.127597ms
	I1031 00:12:15.849292  248387 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:12:15.849476  248387 config.go:182] Loaded profile config "no-preload-640155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:12:15.849565  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.852413  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.852804  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.852848  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.853027  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:15.853227  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.853440  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.853549  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:15.853724  248387 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:15.854104  248387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.168 22 <nil> <nil>}
	I1031 00:12:15.854132  248387 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:12:16.147033  248387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:12:16.147078  248387 machine.go:91] provisioned docker machine in 825.808812ms
	I1031 00:12:16.147094  248387 start.go:300] post-start starting for "no-preload-640155" (driver="kvm2")
	I1031 00:12:16.147110  248387 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:12:16.147138  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.147515  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:12:16.147545  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:16.150321  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.150755  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.150798  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.150909  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:16.151155  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.151335  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:16.151493  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:12:16.237897  248387 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:12:16.242343  248387 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:12:16.242367  248387 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:12:16.242440  248387 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:12:16.242526  248387 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:12:16.242636  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:12:16.250454  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:12:16.273390  248387 start.go:303] post-start completed in 126.280341ms
	I1031 00:12:16.273411  248387 fix.go:56] fixHost completed within 22.891678533s
	I1031 00:12:16.273433  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:16.276291  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.276598  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.276630  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.276761  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:16.276989  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.277270  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.277434  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:16.277621  248387 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:16.277984  248387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.168 22 <nil> <nil>}
	I1031 00:12:16.277998  248387 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1031 00:12:16.389581  248387 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698711136.336935137
	
	I1031 00:12:16.389607  248387 fix.go:206] guest clock: 1698711136.336935137
	I1031 00:12:16.389621  248387 fix.go:219] Guest: 2023-10-31 00:12:16.336935137 +0000 UTC Remote: 2023-10-31 00:12:16.273414732 +0000 UTC m=+271.294357841 (delta=63.520405ms)
	I1031 00:12:16.389652  248387 fix.go:190] guest clock delta is within tolerance: 63.520405ms
	I1031 00:12:16.389659  248387 start.go:83] releasing machines lock for "no-preload-640155", held for 23.007957251s
	I1031 00:12:16.389694  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.390027  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetIP
	I1031 00:12:16.392988  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.393466  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.393493  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.393639  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.394137  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.394306  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.394401  248387 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:12:16.394449  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:16.394583  248387 ssh_runner.go:195] Run: cat /version.json
	I1031 00:12:16.394619  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:16.397387  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.397690  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.397757  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.397785  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.397927  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:16.398140  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.398174  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.398206  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.398296  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:16.398430  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:16.398503  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:12:16.398616  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.398784  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:16.398936  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:12:16.520353  248387 ssh_runner.go:195] Run: systemctl --version
	I1031 00:12:16.526647  248387 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:12:16.673048  248387 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:12:16.679657  248387 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:12:16.679738  248387 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:12:16.699616  248387 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 00:12:16.699643  248387 start.go:472] detecting cgroup driver to use...
	I1031 00:12:16.699706  248387 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:12:16.717466  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:12:16.729231  248387 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:12:16.729300  248387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:12:16.741665  248387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:12:16.754175  248387 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:12:16.855649  248387 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:12:16.990153  248387 docker.go:214] disabling docker service ...
	I1031 00:12:16.990239  248387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:12:17.004614  248387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:12:17.017251  248387 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:12:17.143006  248387 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:12:17.257321  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 00:12:17.271045  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:12:17.288903  248387 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1031 00:12:17.289001  248387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:17.298419  248387 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:12:17.298516  248387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:17.308045  248387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:17.317176  248387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:17.327039  248387 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 00:12:17.337269  248387 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:12:17.345814  248387 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 00:12:17.345886  248387 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 00:12:17.359110  248387 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 00:12:17.369376  248387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:12:17.480359  248387 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1031 00:12:17.658006  248387 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:12:17.658099  248387 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:12:17.663296  248387 start.go:540] Will wait 60s for crictl version
	I1031 00:12:17.663467  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:17.667483  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:12:17.709866  248387 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:12:17.709956  248387 ssh_runner.go:195] Run: crio --version
	I1031 00:12:17.757817  248387 ssh_runner.go:195] Run: crio --version
	I1031 00:12:17.812918  248387 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1031 00:12:17.814541  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetIP
	I1031 00:12:17.818008  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:17.818445  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:17.818482  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:17.818745  248387 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1031 00:12:17.822914  248387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:12:17.837885  248387 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:12:17.837941  248387 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:12:17.874977  248387 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1031 00:12:17.875010  248387 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1031 00:12:17.875097  248387 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:17.875104  248387 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:17.875130  248387 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:17.875163  248387 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1031 00:12:17.875181  248387 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:17.875233  248387 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:17.875297  248387 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:17.875306  248387 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:17.876689  248387 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:17.876731  248387 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:17.876696  248387 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:17.876842  248387 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:17.876697  248387 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:17.876695  248387 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:17.876704  248387 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:17.876842  248387 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1031 00:12:18.053090  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:18.059240  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1031 00:12:18.059239  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:18.065016  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:18.069953  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:18.071229  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:18.140026  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:18.149728  248387 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1031 00:12:18.149778  248387 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:18.149835  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.172611  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:18.238794  248387 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1031 00:12:18.238851  248387 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:18.238913  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331173  248387 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1031 00:12:18.331228  248387 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:18.331279  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331278  248387 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1031 00:12:18.331370  248387 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1031 00:12:18.331380  248387 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:18.331401  248387 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:18.331425  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331441  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331463  248387 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1031 00:12:18.331503  248387 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:18.331542  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:18.331584  248387 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1031 00:12:18.331632  248387 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:18.331665  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331545  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331591  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:18.348470  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:18.348506  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:18.348570  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:18.348619  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:18.484280  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:18.484369  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1031 00:12:18.484436  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1031 00:12:18.484501  248387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1031 00:12:18.484532  248387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I1031 00:12:18.513117  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1031 00:12:18.513211  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1031 00:12:18.513238  248387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I1031 00:12:18.513264  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1031 00:12:18.513307  248387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1031 00:12:18.513347  248387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1031 00:12:18.513392  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1031 00:12:18.513515  248387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1031 00:12:18.541278  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1031 00:12:18.541307  248387 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1031 00:12:18.541340  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1031 00:12:18.541348  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1031 00:12:18.541370  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1031 00:12:18.541416  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1031 00:12:18.541466  248387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1031 00:12:18.541493  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1031 00:12:18.541547  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1031 00:12:18.541549  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1031 00:12:17.727796  248718 main.go:141] libmachine: (embed-certs-078843) Waiting to get IP...
	I1031 00:12:17.728716  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:17.729132  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:17.729165  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:17.729087  249483 retry.go:31] will retry after 294.663443ms: waiting for machine to come up
	I1031 00:12:18.025671  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:18.026112  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:18.026145  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:18.026058  249483 retry.go:31] will retry after 377.887631ms: waiting for machine to come up
	I1031 00:12:18.405434  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:18.405878  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:18.405961  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:18.405857  249483 retry.go:31] will retry after 459.989463ms: waiting for machine to come up
	I1031 00:12:18.867094  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:18.867658  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:18.867693  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:18.867590  249483 retry.go:31] will retry after 552.876869ms: waiting for machine to come up
	I1031 00:12:19.422232  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:19.422678  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:19.422711  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:19.422642  249483 retry.go:31] will retry after 574.514705ms: waiting for machine to come up
	I1031 00:12:19.998587  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:19.999158  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:19.999195  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:19.999071  249483 retry.go:31] will retry after 903.246228ms: waiting for machine to come up
	I1031 00:12:20.904654  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:20.905083  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:20.905118  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:20.905028  249483 retry.go:31] will retry after 1.161301577s: waiting for machine to come up
	I1031 00:12:22.067416  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:22.067874  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:22.067906  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:22.067843  249483 retry.go:31] will retry after 1.350619049s: waiting for machine to come up
	I1031 00:12:23.419771  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:23.420313  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:23.420343  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:23.420276  249483 retry.go:31] will retry after 1.783701579s: waiting for machine to come up
	I1031 00:12:25.206301  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:25.206880  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:25.206909  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:25.206820  249483 retry.go:31] will retry after 2.304762715s: waiting for machine to come up
	I1031 00:12:25.834889  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.293473845s)
	I1031 00:12:25.834930  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1031 00:12:25.834949  248387 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.3: (7.293455157s)
	I1031 00:12:25.834967  248387 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1031 00:12:25.834986  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1031 00:12:25.835039  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1031 00:12:28.718454  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (2.883305744s)
	I1031 00:12:28.718498  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1031 00:12:28.718536  248387 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1031 00:12:28.718602  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1031 00:12:27.513250  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:27.513691  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:27.513726  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:27.513617  249483 retry.go:31] will retry after 2.77005827s: waiting for machine to come up
	I1031 00:12:30.287716  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:30.288125  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:30.288181  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:30.288095  249483 retry.go:31] will retry after 2.359494113s: waiting for machine to come up
	I1031 00:12:30.082206  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.363538098s)
	I1031 00:12:30.082241  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1031 00:12:30.082284  248387 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1031 00:12:30.082378  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1031 00:12:32.754830  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (2.672412397s)
	I1031 00:12:32.754865  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1031 00:12:32.754922  248387 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1031 00:12:32.755008  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1031 00:12:34.104402  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (1.3493522s)
	I1031 00:12:34.104443  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1031 00:12:34.104484  248387 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1031 00:12:34.104528  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1031 00:12:36.966451  249055 start.go:369] acquired machines lock for "default-k8s-diff-port-892233" in 2m37.695455763s
	I1031 00:12:36.966568  249055 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:12:36.966579  249055 fix.go:54] fixHost starting: 
	I1031 00:12:36.966927  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:12:36.966965  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:12:36.985392  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46007
	I1031 00:12:36.985889  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:12:36.986473  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:12:36.986501  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:12:36.986870  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:12:36.987100  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:12:36.987295  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:12:36.989416  249055 fix.go:102] recreateIfNeeded on default-k8s-diff-port-892233: state=Stopped err=<nil>
	I1031 00:12:36.989470  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	W1031 00:12:36.989641  249055 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:12:36.991746  249055 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-892233" ...
	I1031 00:12:32.648970  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:32.649516  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:32.649563  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:32.649477  249483 retry.go:31] will retry after 2.827972253s: waiting for machine to come up
	I1031 00:12:35.479127  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.479655  248718 main.go:141] libmachine: (embed-certs-078843) Found IP for machine: 192.168.50.2
	I1031 00:12:35.479691  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has current primary IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.479703  248718 main.go:141] libmachine: (embed-certs-078843) Reserving static IP address...
	I1031 00:12:35.480200  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "embed-certs-078843", mac: "52:54:00:f5:a8:73", ip: "192.168.50.2"} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.480259  248718 main.go:141] libmachine: (embed-certs-078843) DBG | skip adding static IP to network mk-embed-certs-078843 - found existing host DHCP lease matching {name: "embed-certs-078843", mac: "52:54:00:f5:a8:73", ip: "192.168.50.2"}
	I1031 00:12:35.480299  248718 main.go:141] libmachine: (embed-certs-078843) Reserved static IP address: 192.168.50.2
	I1031 00:12:35.480319  248718 main.go:141] libmachine: (embed-certs-078843) Waiting for SSH to be available...
	I1031 00:12:35.480334  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Getting to WaitForSSH function...
	I1031 00:12:35.482640  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.483140  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.483177  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.483343  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Using SSH client type: external
	I1031 00:12:35.483373  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa (-rw-------)
	I1031 00:12:35.483409  248718 main.go:141] libmachine: (embed-certs-078843) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:12:35.483434  248718 main.go:141] libmachine: (embed-certs-078843) DBG | About to run SSH command:
	I1031 00:12:35.483453  248718 main.go:141] libmachine: (embed-certs-078843) DBG | exit 0
	I1031 00:12:35.573283  248718 main.go:141] libmachine: (embed-certs-078843) DBG | SSH cmd err, output: <nil>: 
	I1031 00:12:35.573731  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetConfigRaw
	I1031 00:12:35.574538  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetIP
	I1031 00:12:35.577369  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.577820  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.577856  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.578175  248718 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/config.json ...
	I1031 00:12:35.578461  248718 machine.go:88] provisioning docker machine ...
	I1031 00:12:35.578486  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:35.578719  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetMachineName
	I1031 00:12:35.578919  248718 buildroot.go:166] provisioning hostname "embed-certs-078843"
	I1031 00:12:35.578946  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetMachineName
	I1031 00:12:35.579137  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:35.581632  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.582041  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.582075  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.582185  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:35.582376  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:35.582556  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:35.582694  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:35.582864  248718 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:35.583247  248718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I1031 00:12:35.583268  248718 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-078843 && echo "embed-certs-078843" | sudo tee /etc/hostname
	I1031 00:12:35.717684  248718 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-078843
	
	I1031 00:12:35.717719  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:35.720882  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.721264  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.721299  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.721514  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:35.721732  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:35.721908  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:35.722057  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:35.722318  248718 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:35.722757  248718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I1031 00:12:35.722777  248718 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-078843' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-078843/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-078843' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:12:35.865568  248718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
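	Editor's note: the SSH snippet above keeps the hostname mapping idempotent. It only touches /etc/hosts when no existing line ends in the machine name, rewriting the 127.0.1.1 entry if one is present and appending otherwise. Below is a minimal local Go sketch of that same logic; the file path is a stand-in, since the real step runs the shell version over SSH as root.

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the provisioning snippet above: if no line in the
// hosts file already ends with the machine name, it rewrites an existing
// 127.0.1.1 entry or appends a new one.
func ensureHostsEntry(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	present := regexp.MustCompile(`\s` + regexp.QuoteMeta(name) + `$`)
	for _, l := range lines {
		if present.MatchString(l) {
			return nil // entry already exists, nothing to change
		}
	}
	loopback := regexp.MustCompile(`^127\.0\.1\.1\s`)
	replaced := false
	for i, l := range lines {
		if loopback.MatchString(l) {
			lines[i] = "127.0.1.1 " + name
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+name)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	// Hypothetical local copy of /etc/hosts; the real step edits the guest's file.
	if err := ensureHostsEntry("/tmp/hosts-example", "embed-certs-078843"); err != nil {
		fmt.Println("hosts update failed:", err)
	}
}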
	I1031 00:12:35.865626  248718 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:12:35.865667  248718 buildroot.go:174] setting up certificates
	I1031 00:12:35.865682  248718 provision.go:83] configureAuth start
	I1031 00:12:35.865696  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetMachineName
	I1031 00:12:35.866070  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetIP
	I1031 00:12:35.869149  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.869571  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.869610  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.869731  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:35.872260  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.872618  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.872665  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.872855  248718 provision.go:138] copyHostCerts
	I1031 00:12:35.872978  248718 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:12:35.873000  248718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:12:35.873069  248718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:12:35.873192  248718 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:12:35.873203  248718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:12:35.873234  248718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:12:35.873316  248718 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:12:35.873327  248718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:12:35.873352  248718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:12:35.873426  248718 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.embed-certs-078843 san=[192.168.50.2 192.168.50.2 localhost 127.0.0.1 minikube embed-certs-078843]
	I1031 00:12:36.016430  248718 provision.go:172] copyRemoteCerts
	I1031 00:12:36.016506  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:12:36.016553  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.019662  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.020054  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.020088  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.020286  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.020505  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.020658  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.020843  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:12:36.111793  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:12:36.140569  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1031 00:12:36.179708  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 00:12:36.203348  248718 provision.go:86] duration metric: configureAuth took 337.646698ms
	I1031 00:12:36.203385  248718 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:12:36.203690  248718 config.go:182] Loaded profile config "embed-certs-078843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:12:36.203835  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.207444  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.207883  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.207923  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.208236  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.208498  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.208690  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.208912  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.209163  248718 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:36.209521  248718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I1031 00:12:36.209547  248718 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:12:36.711502  248718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:12:36.711535  248718 machine.go:91] provisioned docker machine in 1.133056882s
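	Editor's note: setting the container-runtime options for CRI-O amounts to writing a one-line /etc/sysconfig/crio.minikube drop-in and restarting the service, as the SSH command above shows. A hedged standard-library Go sketch of that write-then-restart step follows; running it needs root and systemd, and the content matches the drop-in echoed in the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Content matches the drop-in written in the log above.
	opts := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(opts), 0644); err != nil {
		fmt.Println(err)
		return
	}
	// Restart the runtime so the new options are picked up.
	out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput()
	if err != nil {
		fmt.Printf("restart failed: %v\n%s", err, out)
	}
}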
	I1031 00:12:36.711550  248718 start.go:300] post-start starting for "embed-certs-078843" (driver="kvm2")
	I1031 00:12:36.711563  248718 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:12:36.711587  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.711984  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:12:36.712027  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.714954  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.715374  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.715408  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.715610  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.715815  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.716019  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.716192  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:12:36.803613  248718 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:12:36.808855  248718 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:12:36.808888  248718 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:12:36.808973  248718 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:12:36.809100  248718 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:12:36.809240  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:12:36.818339  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:12:36.845738  248718 start.go:303] post-start completed in 134.172265ms
	I1031 00:12:36.845765  248718 fix.go:56] fixHost completed within 20.4559017s
	I1031 00:12:36.845788  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.848249  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.848592  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.848621  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.848861  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.849120  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.849307  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.849462  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.849659  248718 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:36.850033  248718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I1031 00:12:36.850047  248718 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 00:12:36.966267  248718 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698711156.912809532
	
	I1031 00:12:36.966293  248718 fix.go:206] guest clock: 1698711156.912809532
	I1031 00:12:36.966303  248718 fix.go:219] Guest: 2023-10-31 00:12:36.912809532 +0000 UTC Remote: 2023-10-31 00:12:36.845768911 +0000 UTC m=+239.388163644 (delta=67.040621ms)
	I1031 00:12:36.966329  248718 fix.go:190] guest clock delta is within tolerance: 67.040621ms
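	Editor's note: the fix step runs date on the VM, parses the seconds.nanoseconds value, and only continues when the difference from the host clock is inside a tolerance (67ms here). A small sketch of that comparison, assuming the guest timestamp has already been captured as a string; the 2-second tolerance is illustrative, not minikube's exact threshold.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses a "seconds.nanoseconds" timestamp (as produced by
// date +%s.%N) and returns how far it is from the local clock. float64
// parsing loses sub-microsecond precision, which is fine for a
// millisecond-scale tolerance check.
func clockDelta(guest string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guest, 64)
	if err != nil {
		return 0, err
	}
	guestTime := time.Unix(0, int64(secs*float64(time.Second)))
	d := time.Since(guestTime)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	d, err := clockDelta("1698711156.912809532")
	if err != nil {
		fmt.Println(err)
		return
	}
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("delta=%v within tolerance=%v: %v\n", d, tolerance, d <= tolerance)
}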
	I1031 00:12:36.966341  248718 start.go:83] releasing machines lock for "embed-certs-078843", held for 20.576516085s
	I1031 00:12:36.966380  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.967388  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetIP
	I1031 00:12:36.970301  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.970734  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.970766  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.970934  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.971468  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.971683  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.971781  248718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:12:36.971832  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.971921  248718 ssh_runner.go:195] Run: cat /version.json
	I1031 00:12:36.971951  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.974873  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.975244  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.975323  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.975420  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.975692  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.975718  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.975759  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.975901  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.975959  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.976068  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.976221  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.976279  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.976358  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:12:36.977011  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:12:37.095751  248718 ssh_runner.go:195] Run: systemctl --version
	I1031 00:12:37.101600  248718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:12:37.244676  248718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:12:37.253623  248718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:12:37.253702  248718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:12:37.272872  248718 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 00:12:37.272897  248718 start.go:472] detecting cgroup driver to use...
	I1031 00:12:37.272992  248718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:12:37.290899  248718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:12:37.306570  248718 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:12:37.306633  248718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:12:37.321827  248718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:12:37.336787  248718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:12:37.451589  248718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:12:37.571290  248718 docker.go:214] disabling docker service ...
	I1031 00:12:37.571375  248718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:12:37.587764  248718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:12:37.600627  248718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:12:37.733539  248718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:12:37.850154  248718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 00:12:37.865463  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:12:37.883661  248718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1031 00:12:37.883728  248718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:37.892717  248718 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:12:37.892783  248718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:37.901944  248718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:37.911061  248718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:37.920094  248718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 00:12:37.929520  248718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:12:37.937333  248718 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 00:12:37.937404  248718 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 00:12:37.949591  248718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 00:12:37.960061  248718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:12:38.076354  248718 ssh_runner.go:195] Run: sudo systemctl restart crio
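	Editor's note: the sed calls above each replace the whole line that assigns a key in /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager) and re-insert conmon_cgroup, before reloading systemd and restarting CRI-O. Below is a line-oriented Go sketch of that "replace the assignment line, append if missing" idea, operating on an illustrative copy of the file rather than the live config.

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// setConfLine replaces any line assigning the given key with `key = "value"`,
// appending the assignment if the key is not present at all.
func setConfLine(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + `\s*=.*$`)
	repl := fmt.Sprintf(`%s = "%s"`, key, value)
	if re.MatchString(conf) {
		return re.ReplaceAllString(conf, repl)
	}
	return strings.TrimRight(conf, "\n") + "\n" + repl + "\n"
}

func main() {
	const path = "/tmp/02-crio.conf" // illustrative copy, not the live config
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	conf := string(data)
	conf = setConfLine(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setConfLine(conf, "cgroup_manager", "cgroupfs")
	conf = setConfLine(conf, "conmon_cgroup", "pod")
	if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
		fmt.Println(err)
	}
}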
	I1031 00:12:38.250618  248718 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:12:38.250688  248718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:12:38.255979  248718 start.go:540] Will wait 60s for crictl version
	I1031 00:12:38.256036  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:12:38.259822  248718 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:12:38.299812  248718 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:12:38.299981  248718 ssh_runner.go:195] Run: crio --version
	I1031 00:12:38.343088  248718 ssh_runner.go:195] Run: crio --version
	I1031 00:12:38.397252  248718 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1031 00:12:36.993369  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Start
	I1031 00:12:36.993641  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Ensuring networks are active...
	I1031 00:12:36.994545  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Ensuring network default is active
	I1031 00:12:36.994911  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Ensuring network mk-default-k8s-diff-port-892233 is active
	I1031 00:12:36.995448  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Getting domain xml...
	I1031 00:12:36.996378  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Creating domain...
	I1031 00:12:38.342502  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting to get IP...
	I1031 00:12:38.343505  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.344038  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.344115  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:38.344004  249635 retry.go:31] will retry after 206.530958ms: waiting for machine to come up
	I1031 00:12:38.552789  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.553109  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.553140  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:38.553059  249635 retry.go:31] will retry after 272.962928ms: waiting for machine to come up
	I1031 00:12:38.827741  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.828288  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.828326  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:38.828242  249635 retry.go:31] will retry after 411.85264ms: waiting for machine to come up
	I1031 00:12:35.048294  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1031 00:12:35.048344  248387 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1031 00:12:35.048404  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1031 00:12:36.902739  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (1.854307965s)
	I1031 00:12:36.902771  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1031 00:12:36.902803  248387 cache_images.go:123] Successfully loaded all cached images
	I1031 00:12:36.902810  248387 cache_images.go:92] LoadImages completed in 19.027785915s
	I1031 00:12:36.902926  248387 ssh_runner.go:195] Run: crio config
	I1031 00:12:36.961891  248387 cni.go:84] Creating CNI manager for ""
	I1031 00:12:36.961922  248387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:12:36.961950  248387 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:12:36.961992  248387 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.168 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-640155 NodeName:no-preload-640155 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 00:12:36.962203  248387 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-640155"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
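	Editor's note: the kubeadm config above is rendered from the option struct logged at kubeadm.go:176, with the advertise address, pod subnet, service CIDR and Kubernetes version substituted into a YAML skeleton and written to /var/tmp/minikube/kubeadm.yaml.new. A trimmed-down text/template sketch of that rendering follows; the struct and template are illustrative stand-ins, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// kubeadmOpts is an illustrative subset of the options shown in the log.
type kubeadmOpts struct {
	AdvertiseAddress  string
	PodSubnet         string
	ServiceCIDR       string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.AdvertiseAddress}}"]
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress:  "192.168.61.168",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		KubernetesVersion: "v1.28.3",
	}
	// Render to stdout; minikube writes the rendered result to the target file.
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, opts)
}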
	
	I1031 00:12:36.962312  248387 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-640155 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-640155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 00:12:36.962389  248387 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 00:12:36.973945  248387 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:12:36.974026  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:12:36.987534  248387 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1031 00:12:37.006510  248387 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:12:37.025092  248387 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1031 00:12:37.045090  248387 ssh_runner.go:195] Run: grep 192.168.61.168	control-plane.minikube.internal$ /etc/hosts
	I1031 00:12:37.048822  248387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.168	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:12:37.061985  248387 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155 for IP: 192.168.61.168
	I1031 00:12:37.062026  248387 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:12:37.062243  248387 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:12:37.062310  248387 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:12:37.062410  248387 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/client.key
	I1031 00:12:37.062508  248387 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/apiserver.key.96e3443b
	I1031 00:12:37.062570  248387 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/proxy-client.key
	I1031 00:12:37.062707  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:12:37.062750  248387 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:12:37.062767  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:12:37.062832  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:12:37.062877  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:12:37.062923  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:12:37.062987  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:12:37.063745  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:12:37.090011  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:12:37.119401  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:12:37.148361  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 00:12:37.173730  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:12:37.197769  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:12:37.221625  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:12:37.244497  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:12:37.274559  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:12:37.300372  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:12:37.332082  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:12:37.361826  248387 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:12:37.380561  248387 ssh_runner.go:195] Run: openssl version
	I1031 00:12:37.386185  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:12:37.396710  248387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:12:37.401896  248387 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:12:37.401983  248387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:12:37.407778  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:12:37.418091  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:12:37.427985  248387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:37.432581  248387 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:37.432649  248387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:37.438103  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 00:12:37.447792  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:12:37.457689  248387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:12:37.462421  248387 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:12:37.462495  248387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:12:37.468482  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
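	Editor's note: each CA certificate copied under /usr/share/ca-certificates is then linked from /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e.0, b5213941.0 and 51391683.0 in this run) so the system trust store can find it by hash. The sketch below reproduces the hash-then-symlink step by shelling out to openssl, as the log does; writing under /etc requires root, and the paths are taken from the log only as examples.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash and
// creates the <certsDir>/<hash>.0 symlink if it is missing.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink already exists
	}
	return os.Symlink(certPath, link)
}

func main() {
	err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Println(err)
	}
}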
	I1031 00:12:37.478565  248387 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:12:37.483264  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:12:37.491175  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:12:37.498212  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:12:37.504019  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:12:37.509730  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:12:37.516218  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
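	Editor's note: the openssl x509 -checkend 86400 calls confirm that every control-plane certificate stays valid for at least 24 hours before a cluster restart is attempted. The equivalent check in Go with crypto/x509 is sketched below; the certificate path is taken from the log and is only an example.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid for at
// least the given duration, mirroring openssl's -checkend behaviour.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("valid for 24h:", ok)
}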
	I1031 00:12:37.523364  248387 kubeadm.go:404] StartCluster: {Name:no-preload-640155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-640155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.168 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:12:37.523465  248387 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:12:37.523522  248387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:12:37.576223  248387 cri.go:89] found id: ""
	I1031 00:12:37.576314  248387 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 00:12:37.586094  248387 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 00:12:37.586133  248387 kubeadm.go:636] restartCluster start
	I1031 00:12:37.586217  248387 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 00:12:37.595614  248387 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:37.596791  248387 kubeconfig.go:92] found "no-preload-640155" server: "https://192.168.61.168:8443"
	I1031 00:12:37.600710  248387 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 00:12:37.610066  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:37.610137  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:37.620501  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:37.620528  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:37.620578  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:37.630477  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:38.131205  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:38.131335  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:38.144627  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:38.631491  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:38.631587  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:38.647034  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:39.131616  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:39.131749  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:39.148723  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:39.631171  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:39.631273  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:39.645807  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
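	Editor's note: the repeated "Checking apiserver status" entries are a poll loop. Roughly every 500ms the restart path runs pgrep for the kube-apiserver process and treats exit status 1 as "not up yet", until a deadline expires. Below is a generic sketch of that shape; the pattern, interval and timeout are assumptions read off the timestamps above rather than minikube's exact constants.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or the deadline passes.
func waitForProcess(pattern string, interval, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output()
		if err == nil {
			return string(out), nil // pgrep exits 0 once a process matches
		}
		time.Sleep(interval)
	}
	return "", fmt.Errorf("no process matching %q within %v", pattern, timeout)
}

func main() {
	pid, err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 30*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}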
	I1031 00:12:38.398862  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetIP
	I1031 00:12:38.401804  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:38.402158  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:38.402193  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:38.402475  248718 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1031 00:12:38.407041  248718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:12:38.421147  248718 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:12:38.421228  248718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:12:38.461162  248718 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1031 00:12:38.461240  248718 ssh_runner.go:195] Run: which lz4
	I1031 00:12:38.465401  248718 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1031 00:12:38.469796  248718 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 00:12:38.469833  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1031 00:12:40.419642  248718 crio.go:444] Took 1.954260 seconds to copy over tarball
	I1031 00:12:40.419721  248718 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 00:12:39.241872  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:39.242407  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:39.242465  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:39.242347  249635 retry.go:31] will retry after 371.774477ms: waiting for machine to come up
	I1031 00:12:39.616171  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:39.616708  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:39.616747  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:39.616671  249635 retry.go:31] will retry after 487.120901ms: waiting for machine to come up
	I1031 00:12:40.105492  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:40.106116  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:40.106151  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:40.106066  249635 retry.go:31] will retry after 767.19349ms: waiting for machine to come up
	I1031 00:12:40.875432  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:40.875932  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:40.876009  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:40.875892  249635 retry.go:31] will retry after 976.411998ms: waiting for machine to come up
	I1031 00:12:41.854227  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:41.854759  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:41.854794  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:41.854691  249635 retry.go:31] will retry after 1.041793781s: waiting for machine to come up
	I1031 00:12:42.898223  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:42.898628  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:42.898658  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:42.898577  249635 retry.go:31] will retry after 1.163252223s: waiting for machine to come up
	I1031 00:12:44.064217  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:44.064593  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:44.064626  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:44.064543  249635 retry.go:31] will retry after 1.879015473s: waiting for machine to come up
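	Editor's note: while the default-k8s-diff-port VM boots, the driver keeps querying the libvirt DHCP leases and, when no IP is found yet, retries after a growing delay (206ms, 272ms, 411ms, up to about 1.9s above). A generic retry-with-growing-backoff sketch follows; lookupIP is a hypothetical stand-in for the actual lease query, and the delay growth is illustrative rather than minikube's exact policy.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// lookupIP is a placeholder for querying the DHCP leases of the VM's network.
func lookupIP() (string, error) {
	return "", errNoIP // always "not ready" in this sketch
}

// waitForIP retries lookupIP with a randomized, growing delay, similar to the
// retry.go entries in the log.
func waitForIP(maxTries int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < maxTries; i++ {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("retry %d: will retry after %v\n", i+1, delay+jitter)
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow the base delay each attempt
	}
	return "", fmt.Errorf("machine did not get an IP after %d attempts", maxTries)
}

func main() {
	if _, err := waitForIP(5); err != nil {
		fmt.Println(err)
	}
}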
	I1031 00:12:40.131216  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:40.131331  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:40.146846  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:40.630673  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:40.630747  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:40.642955  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:41.131275  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:41.131410  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:41.144530  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:41.631108  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:41.631219  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:41.645873  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:42.131506  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:42.131641  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:42.147504  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:42.630664  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:42.630769  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:42.645755  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:43.131375  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:43.131503  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:43.143357  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:43.631616  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:43.631714  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:43.647203  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.130693  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.130791  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.143566  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.630736  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.630816  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.642486  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:43.535831  248718 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.116078442s)
	I1031 00:12:43.535864  248718 crio.go:451] Took 3.116189 seconds to extract the tarball
	I1031 00:12:43.535877  248718 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 00:12:43.579902  248718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:12:43.635701  248718 crio.go:496] all images are preloaded for cri-o runtime.
	I1031 00:12:43.635724  248718 cache_images.go:84] Images are preloaded, skipping loading
	I1031 00:12:43.635796  248718 ssh_runner.go:195] Run: crio config
	I1031 00:12:43.714916  248718 cni.go:84] Creating CNI manager for ""
	I1031 00:12:43.714939  248718 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:12:43.714958  248718 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:12:43.714976  248718 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-078843 NodeName:embed-certs-078843 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 00:12:43.715146  248718 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-078843"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 00:12:43.715232  248718 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-078843 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-078843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 00:12:43.715295  248718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 00:12:43.726847  248718 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:12:43.726938  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:12:43.738352  248718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1031 00:12:43.756439  248718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:12:43.773955  248718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1031 00:12:43.793790  248718 ssh_runner.go:195] Run: grep 192.168.50.2	control-plane.minikube.internal$ /etc/hosts
	I1031 00:12:43.798155  248718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:12:43.811602  248718 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843 for IP: 192.168.50.2
	I1031 00:12:43.811649  248718 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:12:43.811819  248718 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:12:43.811877  248718 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:12:43.811963  248718 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/client.key
	I1031 00:12:43.812051  248718 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/apiserver.key.e10f976c
	I1031 00:12:43.812117  248718 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/proxy-client.key
	I1031 00:12:43.812261  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:12:43.812301  248718 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:12:43.812317  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:12:43.812359  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:12:43.812395  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:12:43.812430  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:12:43.812491  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:12:43.813192  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:12:43.841097  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:12:43.867995  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:12:43.892834  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1031 00:12:43.917649  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:12:43.942299  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:12:43.971154  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:12:43.995032  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:12:44.022277  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:12:44.047549  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:12:44.071370  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:12:44.095933  248718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:12:44.113479  248718 ssh_runner.go:195] Run: openssl version
	I1031 00:12:44.119266  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:12:44.133710  248718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:12:44.140098  248718 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:12:44.140180  248718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:12:44.146416  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:12:44.159207  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:12:44.171618  248718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:44.178288  248718 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:44.178375  248718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:44.186339  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 00:12:44.200864  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:12:44.212513  248718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:12:44.217549  248718 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:12:44.217616  248718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:12:44.225170  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1031 00:12:44.239600  248718 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:12:44.244470  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:12:44.252637  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:12:44.260635  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:12:44.269017  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:12:44.277210  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:12:44.285394  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1031 00:12:44.293419  248718 kubeadm.go:404] StartCluster: {Name:embed-certs-078843 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-078843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:12:44.293507  248718 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:12:44.293620  248718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:12:44.339212  248718 cri.go:89] found id: ""
	I1031 00:12:44.339302  248718 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 00:12:44.350219  248718 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 00:12:44.350249  248718 kubeadm.go:636] restartCluster start
	I1031 00:12:44.350315  248718 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 00:12:44.360185  248718 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.361826  248718 kubeconfig.go:92] found "embed-certs-078843" server: "https://192.168.50.2:8443"
	I1031 00:12:44.365579  248718 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 00:12:44.376923  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.377021  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.390684  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.390708  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.390768  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.404614  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.905332  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.905451  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.918162  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:45.405760  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:45.405845  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:45.419071  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:45.905669  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:45.905770  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:45.922243  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:46.404757  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:46.404870  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:46.419662  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:46.905223  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:46.905328  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:46.919993  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:47.405571  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:47.405660  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:47.418433  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:45.944837  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:45.945386  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:45.945422  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:45.945318  249635 retry.go:31] will retry after 1.840120385s: waiting for machine to come up
	I1031 00:12:47.787276  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:47.787807  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:47.787844  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:47.787751  249635 retry.go:31] will retry after 2.306470153s: waiting for machine to come up
	I1031 00:12:45.131185  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:45.225229  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:45.237425  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:45.630872  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:45.630948  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:45.644580  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:46.131199  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:46.131280  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:46.142872  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:46.631467  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:46.631545  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:46.648339  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:47.130861  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:47.131000  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:47.146189  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:47.610939  248387 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:12:47.610999  248387 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:12:47.611016  248387 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:12:47.611107  248387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:12:47.656888  248387 cri.go:89] found id: ""
	I1031 00:12:47.656982  248387 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:12:47.678724  248387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:12:47.688879  248387 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:12:47.688985  248387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:12:47.697091  248387 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:12:47.697115  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:47.837056  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:48.448497  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:48.639877  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:48.735406  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:48.824428  248387 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:12:48.824521  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:48.840207  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:49.357050  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:49.857029  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:47.905449  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:47.905552  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:47.921939  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:48.405557  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:48.405656  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:48.417674  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:48.905114  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:48.905225  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:48.919218  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:49.404811  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:49.404908  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:49.420062  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:49.905655  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:49.905769  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:49.922828  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:50.405471  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:50.405578  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:50.423259  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:50.904727  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:50.904819  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:50.920673  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:51.405155  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:51.405246  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:51.421731  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:51.905024  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:51.905101  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:51.919385  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:52.404843  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:52.404985  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:52.420088  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:50.095827  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:50.096326  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:50.096365  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:50.096281  249635 retry.go:31] will retry after 3.872051375s: waiting for machine to come up
	I1031 00:12:53.970393  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:53.970918  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:53.970956  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:53.970839  249635 retry.go:31] will retry after 5.345847198s: waiting for machine to come up
	I1031 00:12:50.357101  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:50.857024  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:51.357298  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:51.380143  248387 api_server.go:72] duration metric: took 2.555721824s to wait for apiserver process to appear ...
	I1031 00:12:51.380180  248387 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:12:51.380220  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:54.457683  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:12:54.457719  248387 api_server.go:103] status: https://192.168.61.168:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:12:54.457733  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:54.509385  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:12:54.509424  248387 api_server.go:103] status: https://192.168.61.168:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:12:55.010185  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:55.017172  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:12:55.017201  248387 api_server.go:103] status: https://192.168.61.168:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:12:55.510171  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:55.517062  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:12:55.517114  248387 api_server.go:103] status: https://192.168.61.168:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:12:56.009671  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:56.017135  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 200:
	ok
	I1031 00:12:56.026278  248387 api_server.go:141] control plane version: v1.28.3
	I1031 00:12:56.026307  248387 api_server.go:131] duration metric: took 4.646117858s to wait for apiserver health ...
	I1031 00:12:56.026319  248387 cni.go:84] Creating CNI manager for ""
	I1031 00:12:56.026331  248387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:12:56.028208  248387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:12:52.904735  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:52.904835  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:52.917320  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:53.405426  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:53.405546  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:53.420386  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:53.904921  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:53.905039  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:53.917303  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:54.377921  248718 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:12:54.377976  248718 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:12:54.377991  248718 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:12:54.378079  248718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:12:54.418685  248718 cri.go:89] found id: ""
	I1031 00:12:54.418768  248718 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:12:54.436536  248718 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:12:54.451466  248718 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:12:54.451534  248718 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:12:54.464460  248718 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:12:54.464484  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:54.601286  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:55.468262  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:55.664604  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:55.761171  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:55.838690  248718 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:12:55.838793  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:55.857817  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:56.379368  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:56.878782  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:57.379756  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:56.029552  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:12:56.078774  248387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:12:56.128262  248387 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:12:56.139995  248387 system_pods.go:59] 8 kube-system pods found
	I1031 00:12:56.140025  248387 system_pods.go:61] "coredns-5dd5756b68-qbvjb" [92f771d8-381b-4e38-945f-ad5ceae72b80] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1031 00:12:56.140035  248387 system_pods.go:61] "etcd-no-preload-640155" [44fcbc32-757b-4406-97ed-88ad76ae4eee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1031 00:12:56.140042  248387 system_pods.go:61] "kube-apiserver-no-preload-640155" [b92b3dec-827f-4221-8c28-83a738186e52] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1031 00:12:56.140048  248387 system_pods.go:61] "kube-controller-manager-no-preload-640155" [62661788-bde2-42b9-9469-a2f2c51ee283] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1031 00:12:56.140057  248387 system_pods.go:61] "kube-proxy-rv76j" [293b1dd9-fc85-4647-91c9-874ad357d222] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1031 00:12:56.140063  248387 system_pods.go:61] "kube-scheduler-no-preload-640155" [6a11d962-b407-467e-b8a0-9a101b16e4d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1031 00:12:56.140076  248387 system_pods.go:61] "metrics-server-57f55c9bc5-nm8dj" [3924727e-2734-497d-b1b1-d8f9a0ab095a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:12:56.140090  248387 system_pods.go:61] "storage-provisioner" [f8e0a3fa-eaf1-45e1-afbc-a5b2287e7703] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1031 00:12:56.140100  248387 system_pods.go:74] duration metric: took 11.816257ms to wait for pod list to return data ...
	I1031 00:12:56.140110  248387 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:12:56.143298  248387 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:12:56.143327  248387 node_conditions.go:123] node cpu capacity is 2
	I1031 00:12:56.143365  248387 node_conditions.go:105] duration metric: took 3.247248ms to run NodePressure ...
	I1031 00:12:56.143402  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:56.398227  248387 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1031 00:12:56.403101  248387 kubeadm.go:787] kubelet initialised
	I1031 00:12:56.403124  248387 kubeadm.go:788] duration metric: took 4.866042ms waiting for restarted kubelet to initialise ...
	I1031 00:12:56.403134  248387 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:12:56.408758  248387 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-qbvjb" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:56.416185  248387 pod_ready.go:97] node "no-preload-640155" hosting pod "coredns-5dd5756b68-qbvjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.416218  248387 pod_ready.go:81] duration metric: took 7.431969ms waiting for pod "coredns-5dd5756b68-qbvjb" in "kube-system" namespace to be "Ready" ...
	E1031 00:12:56.416229  248387 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-640155" hosting pod "coredns-5dd5756b68-qbvjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.416238  248387 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:56.421589  248387 pod_ready.go:97] node "no-preload-640155" hosting pod "etcd-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.421611  248387 pod_ready.go:81] duration metric: took 5.364261ms waiting for pod "etcd-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	E1031 00:12:56.421619  248387 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-640155" hosting pod "etcd-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.421624  248387 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:56.427046  248387 pod_ready.go:97] node "no-preload-640155" hosting pod "kube-apiserver-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.427075  248387 pod_ready.go:81] duration metric: took 5.443698ms waiting for pod "kube-apiserver-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	E1031 00:12:56.427086  248387 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-640155" hosting pod "kube-apiserver-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.427098  248387 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:56.534169  248387 pod_ready.go:97] node "no-preload-640155" hosting pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.534224  248387 pod_ready.go:81] duration metric: took 107.102474ms waiting for pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	E1031 00:12:56.534241  248387 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-640155" hosting pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.534255  248387 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rv76j" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:57.332793  248387 pod_ready.go:92] pod "kube-proxy-rv76j" in "kube-system" namespace has status "Ready":"True"
	I1031 00:12:57.332824  248387 pod_ready.go:81] duration metric: took 798.55794ms waiting for pod "kube-proxy-rv76j" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:57.332838  248387 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:59.642186  248387 pod_ready.go:102] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:00.818958  248084 start.go:369] acquired machines lock for "old-k8s-version-225140" in 1m2.435313483s
	I1031 00:13:00.819017  248084 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:13:00.819032  248084 fix.go:54] fixHost starting: 
	I1031 00:13:00.819456  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:00.819490  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:00.838737  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I1031 00:13:00.839191  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:00.839773  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:13:00.839794  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:00.840290  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:00.840514  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:00.840697  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:13:00.843346  248084 fix.go:102] recreateIfNeeded on old-k8s-version-225140: state=Stopped err=<nil>
	I1031 00:13:00.843381  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	W1031 00:13:00.843658  248084 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:13:00.848997  248084 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-225140" ...
	I1031 00:12:59.318443  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.319011  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Found IP for machine: 192.168.39.2
	I1031 00:12:59.319037  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Reserving static IP address...
	I1031 00:12:59.319070  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has current primary IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.319522  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-892233", mac: "52:54:00:f4:e2:1e", ip: "192.168.39.2"} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.319557  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Reserved static IP address: 192.168.39.2
	I1031 00:12:59.319595  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | skip adding static IP to network mk-default-k8s-diff-port-892233 - found existing host DHCP lease matching {name: "default-k8s-diff-port-892233", mac: "52:54:00:f4:e2:1e", ip: "192.168.39.2"}
	I1031 00:12:59.319620  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | Getting to WaitForSSH function...
	I1031 00:12:59.319653  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for SSH to be available...
	I1031 00:12:59.322357  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.322780  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.322819  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.322938  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | Using SSH client type: external
	I1031 00:12:59.322969  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa (-rw-------)
	I1031 00:12:59.323009  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:12:59.323029  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | About to run SSH command:
	I1031 00:12:59.323064  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | exit 0
	I1031 00:12:59.421581  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | SSH cmd err, output: <nil>: 
	I1031 00:12:59.421963  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetConfigRaw
	I1031 00:12:59.422651  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetIP
	I1031 00:12:59.425540  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.425916  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.425961  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.426201  249055 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/config.json ...
	I1031 00:12:59.426454  249055 machine.go:88] provisioning docker machine ...
	I1031 00:12:59.426481  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:12:59.426720  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetMachineName
	I1031 00:12:59.426879  249055 buildroot.go:166] provisioning hostname "default-k8s-diff-port-892233"
	I1031 00:12:59.426898  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetMachineName
	I1031 00:12:59.427067  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:12:59.429588  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.429937  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.429975  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.430208  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:12:59.430403  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:12:59.430573  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:12:59.430690  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:12:59.430852  249055 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:59.431368  249055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1031 00:12:59.431386  249055 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-892233 && echo "default-k8s-diff-port-892233" | sudo tee /etc/hostname
	I1031 00:12:59.572253  249055 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-892233
	
	I1031 00:12:59.572299  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:12:59.575534  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.575858  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.575919  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.576140  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:12:59.576366  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:12:59.576592  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:12:59.576766  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:12:59.576919  249055 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:59.577349  249055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1031 00:12:59.577372  249055 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-892233' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-892233/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-892233' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:12:59.714987  249055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
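The shell block above is minikube's idempotent hostname fix-up: if /etc/hosts does not already name the machine, it either rewrites the existing 127.0.1.1 line or appends one. Below is a minimal local Go sketch of the same decision, not minikube's own code; the file name and hostname are illustrative, and the real command runs as root over SSH inside the guest.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry mirrors the sed/echo logic in the log: do nothing if the
    // hostname is already mapped, otherwise rewrite the 127.0.1.1 line or append one.
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(string(data), "\n")
        for _, line := range lines {
            fields := strings.Fields(line)
            if len(fields) >= 2 && fields[len(fields)-1] == hostname {
                return nil // hostname already mapped, nothing to do
            }
        }
        replaced := false
        for i, line := range lines {
            if strings.HasPrefix(strings.TrimSpace(line), "127.0.1.1") {
                lines[i] = "127.0.1.1 " + hostname // rewrite the existing loopback alias
                replaced = true
                break
            }
        }
        if !replaced {
            lines = append(lines, "127.0.1.1 "+hostname) // no entry at all: append one
        }
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() {
        // Hypothetical local test file; the real target is the guest's /etc/hosts.
        if err := ensureHostsEntry("hosts.test", "default-k8s-diff-port-892233"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
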
	I1031 00:12:59.715020  249055 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:12:59.715079  249055 buildroot.go:174] setting up certificates
	I1031 00:12:59.715094  249055 provision.go:83] configureAuth start
	I1031 00:12:59.715115  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetMachineName
	I1031 00:12:59.715440  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetIP
	I1031 00:12:59.718485  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.718900  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.718932  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.719039  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:12:59.721488  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.721844  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.721874  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.722068  249055 provision.go:138] copyHostCerts
	I1031 00:12:59.722141  249055 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:12:59.722155  249055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:12:59.722227  249055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:12:59.722363  249055 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:12:59.722377  249055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:12:59.722402  249055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:12:59.722528  249055 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:12:59.722538  249055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:12:59.722560  249055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:12:59.722619  249055 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-892233 san=[192.168.39.2 192.168.39.2 localhost 127.0.0.1 minikube default-k8s-diff-port-892233]
	I1031 00:13:00.038821  249055 provision.go:172] copyRemoteCerts
	I1031 00:13:00.038892  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:13:00.038924  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.042237  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.042585  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.042627  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.042753  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.042976  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.043252  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.043410  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:13:00.130665  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:13:00.158853  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1031 00:13:00.188023  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 00:13:00.214990  249055 provision.go:86] duration metric: configureAuth took 499.878655ms
	I1031 00:13:00.215020  249055 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:13:00.215284  249055 config.go:182] Loaded profile config "default-k8s-diff-port-892233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:13:00.215445  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.218339  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.218821  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.218861  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.219039  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.219282  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.219500  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.219672  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.219873  249055 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:00.220371  249055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1031 00:13:00.220411  249055 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:13:00.567578  249055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:13:00.567663  249055 machine.go:91] provisioned docker machine in 1.141189726s
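The last provisioning step logged above drops a one-line sysconfig file with extra CRI-O flags and restarts the service. A rough Go stand-in for that step, assuming the literal path and flag string from the log and root privileges inside the guest (minikube itself runs this over SSH, not locally):

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
    )

    func main() {
        // Path and options copied from the log; running this for real requires root.
        dir := "/etc/sysconfig"
        opts := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"

        if err := os.MkdirAll(dir, 0755); err != nil {
            log.Fatal(err)
        }
        // Drop the extra flags where the CRI-O unit file picks them up.
        if err := os.WriteFile(filepath.Join(dir, "crio.minikube"), []byte(opts), 0644); err != nil {
            log.Fatal(err)
        }
        // Restart CRI-O so the new flags take effect.
        if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
            log.Fatalf("restart crio: %v\n%s", err, out)
        }
    }
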
	I1031 00:13:00.567680  249055 start.go:300] post-start starting for "default-k8s-diff-port-892233" (driver="kvm2")
	I1031 00:13:00.567695  249055 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:13:00.567719  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.568094  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:13:00.568134  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.570983  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.571434  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.571478  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.571649  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.571849  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.572010  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.572173  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:13:00.660300  249055 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:13:00.665751  249055 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:13:00.665779  249055 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:13:00.665853  249055 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:13:00.665958  249055 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:13:00.666046  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:13:00.677668  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:13:00.702125  249055 start.go:303] post-start completed in 134.425173ms
	I1031 00:13:00.702165  249055 fix.go:56] fixHost completed within 23.735576451s
	I1031 00:13:00.702195  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.705554  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.705976  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.706029  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.706319  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.706545  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.706722  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.706872  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.707040  249055 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:00.707449  249055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1031 00:13:00.707470  249055 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 00:13:00.818749  249055 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698711180.762641951
	
	I1031 00:13:00.818785  249055 fix.go:206] guest clock: 1698711180.762641951
	I1031 00:13:00.818797  249055 fix.go:219] Guest: 2023-10-31 00:13:00.762641951 +0000 UTC Remote: 2023-10-31 00:13:00.70217124 +0000 UTC m=+181.580385758 (delta=60.470711ms)
	I1031 00:13:00.818850  249055 fix.go:190] guest clock delta is within tolerance: 60.470711ms
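The guest/host clock comparison above runs date +%s.%N on the guest and checks the drift against a tolerance. A small sketch of that check, using the timestamp from the log; the one-second tolerance here is an assumption, not the value minikube uses.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestClockDelta parses the output of `date +%s.%N` from the guest and
    // returns how far it drifts from the host clock. The float parse loses
    // sub-microsecond precision, which is fine for a drift check.
    func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := host.Sub(guest)
        if delta < 0 {
            delta = -delta
        }
        return delta, nil
    }

    func main() {
        // Guest timestamp taken from the log above.
        delta, err := guestClockDelta("1698711180.762641951", time.Now())
        if err != nil {
            panic(err)
        }
        const tolerance = time.Second // assumed tolerance for illustration
        fmt.Printf("clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
    }
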
	I1031 00:13:00.818861  249055 start.go:83] releasing machines lock for "default-k8s-diff-port-892233", held for 23.852333569s
	I1031 00:13:00.818897  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.819199  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetIP
	I1031 00:13:00.822674  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.823152  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.823194  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.823436  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.824107  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.824336  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.824543  249055 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:13:00.824603  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.824669  249055 ssh_runner.go:195] Run: cat /version.json
	I1031 00:13:00.824698  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.827622  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.828092  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.828149  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.828176  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.828377  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.828420  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.828477  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.828558  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.828638  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.828741  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.828817  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.828926  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.829014  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:13:00.829694  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:13:00.945937  249055 ssh_runner.go:195] Run: systemctl --version
	I1031 00:13:00.951731  249055 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:13:01.099346  249055 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:13:01.106701  249055 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:13:01.106789  249055 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:13:01.122651  249055 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 00:13:01.122738  249055 start.go:472] detecting cgroup driver to use...
	I1031 00:13:01.122839  249055 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:13:01.140968  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:13:01.159184  249055 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:13:01.159267  249055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:13:01.176636  249055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:13:01.190420  249055 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:13:01.304327  249055 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:13:01.446312  249055 docker.go:214] disabling docker service ...
	I1031 00:13:01.446440  249055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:13:01.462043  249055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:13:01.478402  249055 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:13:01.618099  249055 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:13:01.745376  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
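The sequence above stops, disables and masks cri-docker and docker so that only CRI-O answers the CRI socket. A compressed Go approximation follows; unlike the log it applies stop/disable/mask to every unit uniformly, and it tolerates errors because some units may not exist on a CRI-O-only image.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // disableUnit stops, disables and masks a systemd unit so it cannot come
    // back on reboot; errors are reported but not fatal.
    func disableUnit(unit string) {
        for _, args := range [][]string{
            {"stop", "-f", unit},
            {"disable", unit},
            {"mask", unit},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                fmt.Printf("systemctl %v: %v\n%s", args, err, out)
            }
        }
    }

    func main() {
        // Same units as in the log: cri-docker first, then docker itself.
        for _, u := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
            disableUnit(u)
        }
        // Final sanity check that docker is no longer active.
        if err := exec.Command("systemctl", "is-active", "--quiet", "docker").Run(); err == nil {
            fmt.Println("docker is still active")
        }
    }
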
	I1031 00:13:01.758262  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:13:01.774927  249055 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1031 00:13:01.774999  249055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:01.784376  249055 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:13:01.784441  249055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:01.793769  249055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:01.802954  249055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:01.813429  249055 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 00:13:01.822730  249055 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:13:01.832032  249055 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 00:13:01.832103  249055 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 00:13:01.845005  249055 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 00:13:01.855358  249055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:13:01.997815  249055 ssh_runner.go:195] Run: sudo systemctl restart crio
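The sed commands above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and move conmon into the "pod" cgroup by rewriting /etc/crio/crio.conf.d/02-crio.conf line by line before the restart. Here is a self-contained sketch of the same rewrite on an in-memory config; it only matches uncommented keys, which is slightly narrower than the sed patterns in the log.

    package main

    import (
        "fmt"
        "strings"
    )

    // rewriteCrioConf applies the same line-level edits shown in the log:
    // pin the pause image, switch the cgroup manager, and put conmon into
    // the "pod" cgroup (the old conmon_cgroup line is dropped).
    func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
        var out []string
        for _, line := range strings.Split(conf, "\n") {
            trimmed := strings.TrimSpace(line)
            switch {
            case strings.HasPrefix(trimmed, "pause_image"):
                out = append(out, fmt.Sprintf("pause_image = %q", pauseImage))
            case strings.HasPrefix(trimmed, "cgroup_manager"):
                out = append(out, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
                out = append(out, `conmon_cgroup = "pod"`) // re-added right after cgroup_manager
            case strings.HasPrefix(trimmed, "conmon_cgroup"):
                // dropped: replaced by the line inserted above
            default:
                out = append(out, line)
            }
        }
        return strings.Join(out, "\n")
    }

    func main() {
        sample := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.6"
    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"`
        fmt.Println(rewriteCrioConf(sample, "registry.k8s.io/pause:3.9", "cgroupfs"))
    }
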
	I1031 00:13:02.229016  249055 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:13:02.229090  249055 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:13:02.233980  249055 start.go:540] Will wait 60s for crictl version
	I1031 00:13:02.234044  249055 ssh_runner.go:195] Run: which crictl
	I1031 00:13:02.237901  249055 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:13:02.280450  249055 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:13:02.280562  249055 ssh_runner.go:195] Run: crio --version
	I1031 00:13:02.326608  249055 ssh_runner.go:195] Run: crio --version
	I1031 00:13:02.381010  249055 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1031 00:12:57.879480  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:58.378990  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:58.401245  248718 api_server.go:72] duration metric: took 2.5625596s to wait for apiserver process to appear ...
	I1031 00:12:58.401294  248718 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:12:58.401317  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:01.483261  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:01.483293  248718 api_server.go:103] status: https://192.168.50.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:01.483309  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:01.586135  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:01.586172  248718 api_server.go:103] status: https://192.168.50.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:02.086932  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:02.095676  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:13:02.095714  248718 api_server.go:103] status: https://192.168.50.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:13:02.586339  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:02.599335  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:13:02.599376  248718 api_server.go:103] status: https://192.168.50.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:13:03.087312  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:03.095444  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 200:
	ok
	I1031 00:13:03.107809  248718 api_server.go:141] control plane version: v1.28.3
	I1031 00:13:03.107842  248718 api_server.go:131] duration metric: took 4.706538937s to wait for apiserver health ...
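The polling loop above hits https://192.168.50.2:8443/healthz until it returns 200, treating the early 403 (anonymous access not yet authorised) and 500 (post-start hooks such as rbac/bootstrap-roles still pending) responses as retryable. A minimal sketch of such a poller follows; the 4-minute timeout, the half-second cadence and the certificate handling are assumptions, not minikube's actual settings.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
    // retrying on any non-200 status or transport error.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver serves a self-signed cert during bring-up; a real
            // client would pin the cluster CA instead of skipping verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, firstLine(string(body)))
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %v", timeout)
    }

    func firstLine(s string) string {
        for i, r := range s {
            if r == '\n' {
                return s[:i]
            }
        }
        return s
    }

    func main() {
        // Endpoint copied from the log; the timeout is an assumption.
        if err := waitForHealthz("https://192.168.50.2:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
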
	I1031 00:13:03.107855  248718 cni.go:84] Creating CNI manager for ""
	I1031 00:13:03.107864  248718 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:03.110057  248718 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:13:02.382546  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetIP
	I1031 00:13:02.386646  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:02.387022  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:02.387068  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:02.387291  249055 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 00:13:02.393394  249055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:13:02.408630  249055 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:13:02.408723  249055 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:13:02.461303  249055 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1031 00:13:02.461388  249055 ssh_runner.go:195] Run: which lz4
	I1031 00:13:02.466160  249055 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1031 00:13:02.472133  249055 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 00:13:02.472175  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
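Above, the missing /preloaded.tar.lz4 is detected with stat, and the ~457 MB preload tarball is then copied into the guest over SSH. A local sketch of the same "copy only if absent or wrong size" decision, using a plain file copy instead of scp; the paths are illustrative.

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    // copyIfMissing copies src to dst only when dst is absent or has a
    // different size, mimicking the stat-then-scp decision in the log.
    func copyIfMissing(src, dst string) (copied bool, err error) {
        si, err := os.Stat(src)
        if err != nil {
            return false, err
        }
        if di, err := os.Stat(dst); err == nil && di.Size() == si.Size() {
            return false, nil // already present with the expected size
        }
        in, err := os.Open(src)
        if err != nil {
            return false, err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return false, err
        }
        defer out.Close()
        if _, err := io.Copy(out, in); err != nil {
            return false, err
        }
        return true, nil
    }

    func main() {
        // Illustrative names; the real source lives under .minikube/cache/preloaded-tarball.
        copied, err := copyIfMissing("preloaded-images.tar.lz4", "preloaded.tar.lz4")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("copied:", copied)
    }
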
	I1031 00:13:01.647436  248387 pod_ready.go:102] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:03.653247  248387 pod_ready.go:102] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:03.111616  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:13:03.142561  248718 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:13:03.210454  248718 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:13:03.229202  248718 system_pods.go:59] 8 kube-system pods found
	I1031 00:13:03.229253  248718 system_pods.go:61] "coredns-5dd5756b68-dqrs4" [f6d80a09-c397-4c78-a038-f07cad11de9c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1031 00:13:03.229269  248718 system_pods.go:61] "etcd-embed-certs-078843" [2dd3d20f-1309-4ec9-ab75-6b00cadc5827] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1031 00:13:03.229278  248718 system_pods.go:61] "kube-apiserver-embed-certs-078843" [6a41123e-11a9-4aff-8f78-802b8f59a1bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1031 00:13:03.229289  248718 system_pods.go:61] "kube-controller-manager-embed-certs-078843" [9ccb551e-3e3f-4cdc-991e-65b41febf105] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1031 00:13:03.229302  248718 system_pods.go:61] "kube-proxy-287dq" [c9c3a3a9-ff79-4cd8-ab26-a4ca2bec1fd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1031 00:13:03.229321  248718 system_pods.go:61] "kube-scheduler-embed-certs-078843" [13a0f095-b945-437c-a7ef-929739bfcb01] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1031 00:13:03.229339  248718 system_pods.go:61] "metrics-server-57f55c9bc5-pm6qx" [5ed61015-eb88-4381-adc3-8d1f4021c6aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:13:03.229353  248718 system_pods.go:61] "storage-provisioner" [6bce0572-aad8-4a9f-978f-9bd0ff62904a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1031 00:13:03.229369  248718 system_pods.go:74] duration metric: took 18.888134ms to wait for pod list to return data ...
	I1031 00:13:03.229379  248718 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:13:03.269761  248718 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:13:03.269808  248718 node_conditions.go:123] node cpu capacity is 2
	I1031 00:13:03.269821  248718 node_conditions.go:105] duration metric: took 40.435389ms to run NodePressure ...
	I1031 00:13:03.269843  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:03.828792  248718 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1031 00:13:03.840423  248718 kubeadm.go:787] kubelet initialised
	I1031 00:13:03.840449  248718 kubeadm.go:788] duration metric: took 11.631934ms waiting for restarted kubelet to initialise ...
	I1031 00:13:03.840461  248718 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:13:03.856214  248718 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:03.885090  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.885128  248718 pod_ready.go:81] duration metric: took 28.821802ms waiting for pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:03.885141  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.885169  248718 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:03.903365  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "etcd-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.903468  248718 pod_ready.go:81] duration metric: took 18.286782ms waiting for pod "etcd-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:03.903494  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "etcd-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.903516  248718 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:03.918470  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.918511  248718 pod_ready.go:81] duration metric: took 14.954407ms waiting for pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:03.918536  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.918548  248718 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:03.933999  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.934040  248718 pod_ready.go:81] duration metric: took 15.480835ms waiting for pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:03.934057  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.934068  248718 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-287dq" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:04.237338  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "kube-proxy-287dq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.237374  248718 pod_ready.go:81] duration metric: took 303.296061ms waiting for pod "kube-proxy-287dq" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:04.237389  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "kube-proxy-287dq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.237398  248718 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:04.634179  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.634222  248718 pod_ready.go:81] duration metric: took 396.814691ms waiting for pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:04.634238  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.634253  248718 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:05.035746  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:05.035785  248718 pod_ready.go:81] duration metric: took 401.520697ms waiting for pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:05.035801  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:05.035816  248718 pod_ready.go:38] duration metric: took 1.195339888s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
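The pod_ready loop above queries each control-plane pod through the Kubernetes API and skips ahead while the node itself is still NotReady. As a much simpler stand-in, the sketch below shells out to kubectl wait for the same Ready condition; the context name is taken from the log, while the selectors and timeout are assumptions.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitReady waits for pods matching a label selector to report the Ready
    // condition, using kubectl instead of the API client minikube uses.
    func waitReady(kubeContext, namespace, selector string, timeout time.Duration) error {
        cmd := exec.Command("kubectl",
            "--context", kubeContext,
            "-n", namespace,
            "wait", "--for=condition=Ready", "pod",
            "-l", selector,
            "--timeout", timeout.String(),
        )
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl wait: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        for _, sel := range []string{"k8s-app=kube-dns", "component=kube-apiserver", "component=kube-scheduler"} {
            if err := waitReady("embed-certs-078843", "kube-system", sel, 4*time.Minute); err != nil {
                fmt.Println(err)
            }
        }
    }
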
	I1031 00:13:05.035852  248718 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:13:05.053467  248718 ops.go:34] apiserver oom_adj: -16
	I1031 00:13:05.053499  248718 kubeadm.go:640] restartCluster took 20.703241237s
	I1031 00:13:05.053510  248718 kubeadm.go:406] StartCluster complete in 20.760104259s
	I1031 00:13:05.053534  248718 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:13:05.053649  248718 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:13:05.056586  248718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:13:05.056927  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:13:05.057035  248718 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:13:05.057123  248718 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-078843"
	I1031 00:13:05.057141  248718 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-078843"
	W1031 00:13:05.057149  248718 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:13:05.057204  248718 config.go:182] Loaded profile config "embed-certs-078843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:13:05.057234  248718 addons.go:69] Setting default-storageclass=true in profile "embed-certs-078843"
	I1031 00:13:05.057211  248718 host.go:66] Checking if "embed-certs-078843" exists ...
	I1031 00:13:05.057248  248718 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-078843"
	I1031 00:13:05.057647  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.057682  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.057706  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.057743  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.057816  248718 addons.go:69] Setting metrics-server=true in profile "embed-certs-078843"
	I1031 00:13:05.057835  248718 addons.go:231] Setting addon metrics-server=true in "embed-certs-078843"
	W1031 00:13:05.057846  248718 addons.go:240] addon metrics-server should already be in state true
	I1031 00:13:05.057940  248718 host.go:66] Checking if "embed-certs-078843" exists ...
	I1031 00:13:05.058407  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.058492  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.077590  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40411
	I1031 00:13:05.077948  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44471
	I1031 00:13:05.078081  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.078347  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.078769  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.078785  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.079028  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.079054  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.079408  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.085132  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.085145  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34653
	I1031 00:13:05.085597  248718 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-078843" context rescaled to 1 replicas
	I1031 00:13:05.085640  248718 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:13:05.088029  248718 out.go:177] * Verifying Kubernetes components...
	I1031 00:13:05.085726  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.085922  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.086067  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:13:05.089646  248718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:13:05.089718  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.090571  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.090592  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.091096  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.091945  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.092003  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.095067  248718 addons.go:231] Setting addon default-storageclass=true in "embed-certs-078843"
	W1031 00:13:05.095093  248718 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:13:05.095131  248718 host.go:66] Checking if "embed-certs-078843" exists ...
	I1031 00:13:05.095551  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.095608  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.111102  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I1031 00:13:05.111739  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.112393  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.112413  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.112797  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.112983  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:13:05.114423  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37229
	I1031 00:13:05.114993  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.115615  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.115634  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.115848  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:13:05.116042  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.118503  248718 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:13:05.116288  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:13:05.120126  248718 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:13:05.120149  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:13:05.120184  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:13:05.120637  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I1031 00:13:05.121136  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.121582  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.121601  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.122054  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.122163  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:13:05.122536  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.122576  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.124417  248718 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:13:00.852003  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Start
	I1031 00:13:00.853038  248084 main.go:141] libmachine: (old-k8s-version-225140) Ensuring networks are active...
	I1031 00:13:00.853268  248084 main.go:141] libmachine: (old-k8s-version-225140) Ensuring network default is active
	I1031 00:13:00.853774  248084 main.go:141] libmachine: (old-k8s-version-225140) Ensuring network mk-old-k8s-version-225140 is active
	I1031 00:13:00.854290  248084 main.go:141] libmachine: (old-k8s-version-225140) Getting domain xml...
	I1031 00:13:00.855089  248084 main.go:141] libmachine: (old-k8s-version-225140) Creating domain...
	I1031 00:13:02.250983  248084 main.go:141] libmachine: (old-k8s-version-225140) Waiting to get IP...
	I1031 00:13:02.251883  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:02.252351  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:02.252421  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:02.252327  249826 retry.go:31] will retry after 242.989359ms: waiting for machine to come up
	I1031 00:13:02.497099  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:02.497647  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:02.497671  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:02.497581  249826 retry.go:31] will retry after 267.660992ms: waiting for machine to come up
	I1031 00:13:02.767445  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:02.770812  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:02.770846  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:02.770757  249826 retry.go:31] will retry after 311.592507ms: waiting for machine to come up
	I1031 00:13:03.085650  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:03.086233  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:03.086262  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:03.086139  249826 retry.go:31] will retry after 594.222148ms: waiting for machine to come up
	I1031 00:13:03.681721  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:03.682255  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:03.682286  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:03.682147  249826 retry.go:31] will retry after 758.043103ms: waiting for machine to come up
	I1031 00:13:04.442274  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:04.443048  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:04.443078  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:04.442997  249826 retry.go:31] will retry after 887.518169ms: waiting for machine to come up
	I1031 00:13:05.332541  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:05.333184  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:05.333212  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:05.333129  249826 retry.go:31] will retry after 851.434462ms: waiting for machine to come up
	I1031 00:13:05.125889  248718 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:13:05.125912  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:13:05.125931  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:13:05.124466  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.126004  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:13:05.126025  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.125276  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:13:05.126198  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:13:05.126338  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:13:05.126414  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:13:05.131827  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:13:05.131844  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.131883  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:13:05.131916  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.132049  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:13:05.132274  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:13:05.132420  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:13:05.144729  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I1031 00:13:05.145178  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.145775  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.145795  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.146202  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.146381  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:13:05.149644  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:13:05.150317  248718 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:13:05.150332  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:13:05.150350  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:13:05.153417  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.153915  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:13:05.153956  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.154082  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:13:05.154266  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:13:05.154606  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:13:05.154731  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:13:05.279166  248718 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:13:05.279209  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:13:05.314989  248718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:13:05.318765  248718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:13:05.337844  248718 node_ready.go:35] waiting up to 6m0s for node "embed-certs-078843" to be "Ready" ...
	I1031 00:13:05.338209  248718 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1031 00:13:05.343889  248718 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:13:05.343913  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:13:05.391973  248718 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:13:05.392002  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:13:05.442745  248718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:13:06.821970  248718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.503163864s)
	I1031 00:13:06.822030  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.822047  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.821970  248718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.506945748s)
	I1031 00:13:06.822097  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.822123  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.822539  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.822568  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.822594  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.822620  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.822641  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.822654  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.822665  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.822689  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.822702  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.822711  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.823128  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.823187  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.823196  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.823249  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.823286  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.823305  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.838726  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.838749  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.839036  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.839101  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.839124  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.863966  248718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.421170822s)
	I1031 00:13:06.864085  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.864105  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.864472  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.864499  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.864511  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.864520  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.865117  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.865133  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.865136  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.865144  248718 addons.go:467] Verifying addon metrics-server=true in "embed-certs-078843"
	I1031 00:13:06.868351  248718 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1031 00:13:06.869950  248718 addons.go:502] enable addons completed in 1.812918702s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1031 00:13:07.438581  248718 node_ready.go:58] node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.402138  249055 crio.go:444] Took 1.936056 seconds to copy over tarball
	I1031 00:13:04.402221  249055 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 00:13:07.956805  249055 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.554540356s)
	I1031 00:13:07.956841  249055 crio.go:451] Took 3.554667 seconds to extract the tarball
	I1031 00:13:07.956854  249055 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 00:13:08.017763  249055 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:13:08.072921  249055 crio.go:496] all images are preloaded for cri-o runtime.
	I1031 00:13:08.072982  249055 cache_images.go:84] Images are preloaded, skipping loading
	I1031 00:13:08.073063  249055 ssh_runner.go:195] Run: crio config
	I1031 00:13:08.131013  249055 cni.go:84] Creating CNI manager for ""
	I1031 00:13:08.131045  249055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:08.131070  249055 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:13:08.131099  249055 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-892233 NodeName:default-k8s-diff-port-892233 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 00:13:08.131362  249055 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-892233"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 00:13:08.131583  249055 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-892233 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-892233 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
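
The "0%!"(MISSING) entries in the evictionHard block of the generated config above are almost certainly a logging artifact rather than the intended thresholds: the template value is a literal "0%", and when that string passes through a printf-style formatter with no arguments, Go's fmt package renders the stray % verb as %!"(MISSING). A minimal standalone Go sketch (illustrative only, not minikube code) reproduces the mangling:

package main

import "fmt"

func main() {
	// An unescaped % in a format string with no matching argument is
	// rendered by fmt as %!<verb>(MISSING) -- the same artifact seen in
	// the kubelet evictionHard thresholds in the config dump above.
	mangled := fmt.Sprintf(`nodefs.available: "0%"`)  // nodefs.available: "0%!"(MISSING)
	escaped := fmt.Sprintf(`nodefs.available: "0%%"`) // nodefs.available: "0%"
	fmt.Println(mangled)
	fmt.Println(escaped)
}

Escaping the percent sign as %% (or printing the template without a format call) yields the intended "0%" thresholds.
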
	I1031 00:13:08.131658  249055 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 00:13:08.140884  249055 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:13:08.140973  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:13:08.149405  249055 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (386 bytes)
	I1031 00:13:08.166006  249055 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:13:08.182874  249055 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1031 00:13:08.200304  249055 ssh_runner.go:195] Run: grep 192.168.39.2	control-plane.minikube.internal$ /etc/hosts
	I1031 00:13:08.203993  249055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:13:08.217645  249055 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233 for IP: 192.168.39.2
	I1031 00:13:08.217692  249055 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:13:08.217873  249055 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:13:08.217924  249055 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:13:08.218015  249055 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/client.key
	I1031 00:13:08.308243  249055 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/apiserver.key.dd3b77ed
	I1031 00:13:08.308354  249055 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/proxy-client.key
	I1031 00:13:08.308540  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:13:08.308606  249055 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:13:08.308626  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:13:08.308652  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:13:08.308678  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:13:08.308701  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:13:08.308743  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:13:08.309489  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:13:08.339601  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:13:08.365873  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:13:08.393028  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1031 00:13:08.418983  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:13:08.445555  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:13:08.471234  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:13:08.496657  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:13:08.522698  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:13:08.546933  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:13:08.570645  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:13:08.596096  249055 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:13:08.615431  249055 ssh_runner.go:195] Run: openssl version
	I1031 00:13:08.621901  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:13:08.633316  249055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:13:08.638479  249055 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:13:08.638546  249055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:13:08.644750  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:13:08.656306  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:13:08.669978  249055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:08.675964  249055 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:08.676033  249055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:08.682433  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 00:13:08.694215  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:13:08.706255  249055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:13:08.713046  249055 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:13:08.713147  249055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:13:08.720902  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
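
The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory lookup: each CA certificate gets a symlink in /etc/ssl/certs named <subject-hash>.0, where the hash is exactly what openssl x509 -hash -noout prints, so CApath-based verification can locate the file. A small standalone Go sketch (a hypothetical helper for illustration, not the ssh_runner code used in this test) derives the same link name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashLinkCommand builds the "ln -fs" command for a certificate the way the
// log above does: OpenSSL finds CAs in a directory via symlinks named
// <subject-hash>.0, and the hash comes from `openssl x509 -hash -noout`.
func hashLinkCommand(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	return fmt.Sprintf("ln -fs %s /etc/ssl/certs/%s.0", certPath, hash), nil
}

func main() {
	cmd, err := hashLinkCommand("/usr/share/ca-certificates/216005.pem")
	if err != nil {
		fmt.Println("openssl not available:", err)
		return
	}
	fmt.Println(cmd) // e.g. ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/51391683.0
}
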
	I1031 00:13:08.732062  249055 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:13:08.737112  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:13:08.745040  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:13:08.753046  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:13:08.759410  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:13:08.765847  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:13:08.772651  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
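
The openssl x509 ... -checkend 86400 probes above ask whether each control-plane certificate will still be valid 24 hours (86,400 seconds) from now; openssl exits non-zero if the certificate expires within that window. The same question can be answered with Go's crypto/x509 (an illustrative sketch assuming one PEM certificate per file, not the command minikube actually runs here):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d -- the check `openssl x509 -checkend <seconds>` performs.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
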
	I1031 00:13:08.779086  249055 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-892233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-892233 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.2 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:13:08.779224  249055 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:13:08.779292  249055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:13:08.832024  249055 cri.go:89] found id: ""
	I1031 00:13:08.832096  249055 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 00:13:08.842618  249055 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 00:13:08.842641  249055 kubeadm.go:636] restartCluster start
	I1031 00:13:08.842716  249055 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 00:13:08.852209  249055 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:08.853480  249055 kubeconfig.go:92] found "default-k8s-diff-port-892233" server: "https://192.168.39.2:8444"
	I1031 00:13:08.855965  249055 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 00:13:08.865555  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:08.865617  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:08.877258  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:08.877285  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:08.877332  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:08.887847  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:05.643929  248387 pod_ready.go:92] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:05.643958  248387 pod_ready.go:81] duration metric: took 8.31111047s waiting for pod "kube-scheduler-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:05.643971  248387 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:07.946810  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:06.186224  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:06.186916  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:06.186948  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:06.186867  249826 retry.go:31] will retry after 964.405003ms: waiting for machine to come up
	I1031 00:13:07.153455  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:07.153973  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:07.154006  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:07.153917  249826 retry.go:31] will retry after 1.515980724s: waiting for machine to come up
	I1031 00:13:08.671700  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:08.672189  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:08.672219  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:08.672117  249826 retry.go:31] will retry after 2.254841495s: waiting for machine to come up
	I1031 00:13:09.658372  248718 node_ready.go:58] node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:11.938230  248718 node_ready.go:58] node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:12.439097  248718 node_ready.go:49] node "embed-certs-078843" has status "Ready":"True"
	I1031 00:13:12.439129  248718 node_ready.go:38] duration metric: took 7.101255254s waiting for node "embed-certs-078843" to be "Ready" ...
	I1031 00:13:12.439147  248718 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:13:12.447673  248718 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.469967  248718 pod_ready.go:92] pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.470002  248718 pod_ready.go:81] duration metric: took 22.292329ms waiting for pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.470017  248718 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.482061  248718 pod_ready.go:92] pod "etcd-embed-certs-078843" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.482092  248718 pod_ready.go:81] duration metric: took 12.066806ms waiting for pod "etcd-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.482106  248718 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.489019  248718 pod_ready.go:92] pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.489052  248718 pod_ready.go:81] duration metric: took 6.936171ms waiting for pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.489066  248718 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.500686  248718 pod_ready.go:92] pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.500712  248718 pod_ready.go:81] duration metric: took 11.637946ms waiting for pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.500722  248718 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-287dq" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:09.388669  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:09.388776  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:09.400708  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:09.888027  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:09.888146  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:09.900678  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:10.388004  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:10.388114  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:10.403685  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:10.888198  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:10.888314  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:10.900608  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:11.388239  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:11.388363  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:11.404992  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:11.888425  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:11.888541  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:11.900436  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:12.388293  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:12.388418  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:12.404621  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:12.888037  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:12.888156  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:12.900860  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:13.388276  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:13.388371  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:13.400841  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:13.888124  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:13.888238  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:13.903041  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:10.168791  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:12.169662  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:14.669047  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:10.928893  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:10.929414  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:10.929445  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:10.929369  249826 retry.go:31] will retry after 2.792980456s: waiting for machine to come up
	I1031 00:13:13.724006  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:13.724430  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:13.724469  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:13.724356  249826 retry.go:31] will retry after 2.555956413s: waiting for machine to come up
	I1031 00:13:12.838631  248718 pod_ready.go:92] pod "kube-proxy-287dq" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.838658  248718 pod_ready.go:81] duration metric: took 337.929955ms waiting for pod "kube-proxy-287dq" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.838668  248718 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:13.239513  248718 pod_ready.go:92] pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:13.239541  248718 pod_ready.go:81] duration metric: took 400.86714ms waiting for pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:13.239552  248718 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:15.546507  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:14.388661  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:14.388736  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:14.402388  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:14.888855  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:14.888965  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:14.903137  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:15.388757  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:15.388868  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:15.404412  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:15.888848  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:15.888984  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:15.902181  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:16.388790  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:16.388913  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:16.402283  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:16.888892  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:16.889035  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:16.900677  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:17.388842  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:17.388983  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:17.401399  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:17.888981  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:17.889099  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:17.901474  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:18.387997  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:18.388083  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:18.399745  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:18.866186  249055 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:13:18.866263  249055 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:13:18.866282  249055 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:13:18.866352  249055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:13:18.906125  249055 cri.go:89] found id: ""
	I1031 00:13:18.906214  249055 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:13:18.921555  249055 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:13:18.930111  249055 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:13:18.930193  249055 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:13:18.938516  249055 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:13:18.938545  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:19.070700  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
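Editor's note: the run above (pid 249055) repeatedly probes for a live kube-apiserver by process pattern, gives up at the context deadline, and then rebuilds the control plane by re-running the individual `kubeadm init phase` steps. Below is a minimal Go sketch of the probe-with-retry idea; the local `pgrep` call, the cadence, and the timeout are assumptions for illustration only (minikube executes the same command over SSH through its ssh_runner).

// probe_apiserver.go - sketch: retry a pgrep-style process probe until it succeeds
// or a deadline passes. Assumes a local pgrep; minikube runs this over SSH.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		// pgrep exits 0 when at least one process matches the pattern.
		out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output()
		if err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("no process matching %q before deadline", pattern)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
		fmt.Println("needs reconfigure:", err)
	}
}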
	I1031 00:13:17.167517  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:19.170710  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:16.282473  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:16.282944  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:16.282975  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:16.282900  249826 retry.go:31] will retry after 2.811414756s: waiting for machine to come up
	I1031 00:13:19.096338  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:19.096738  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:19.096760  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:19.096714  249826 retry.go:31] will retry after 3.844203493s: waiting for machine to come up
	I1031 00:13:17.548558  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:20.047074  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:22.047691  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:20.139806  249055 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069066882s)
	I1031 00:13:20.139847  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:20.337823  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:20.417915  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:20.499750  249055 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:13:20.499831  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:20.515735  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:21.029420  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:21.529636  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:22.029757  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:22.529034  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:23.029479  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:23.055542  249055 api_server.go:72] duration metric: took 2.555800185s to wait for apiserver process to appear ...
	I1031 00:13:23.055573  249055 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:13:23.055591  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
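Editor's note: once the apiserver process exists, the same run switches from pid probes to polling https://192.168.39.2:8444/healthz; the 403 and 500 responses seen further down are transient states while RBAC and priority-class bootstrap hooks finish. A hedged Go sketch of that polling loop follows; disabling certificate verification is an illustration-only shortcut, minikube itself authenticates with the cluster CA.

// healthz_poll.go - sketch: poll an apiserver /healthz endpoint until it returns 200.
// The URL is taken from the log; InsecureSkipVerify is for illustration only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.2:8444/healthz")
		if err == nil {
			resp.Body.Close()
			fmt.Println("healthz status:", resp.StatusCode) // 403/500 while booting, then 200
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}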
	I1031 00:13:21.667545  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:24.167560  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:22.943000  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:22.943492  248084 main.go:141] libmachine: (old-k8s-version-225140) Found IP for machine: 192.168.72.65
	I1031 00:13:22.943521  248084 main.go:141] libmachine: (old-k8s-version-225140) Reserving static IP address...
	I1031 00:13:22.943540  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has current primary IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:22.944080  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "old-k8s-version-225140", mac: "52:54:00:9c:98:61", ip: "192.168.72.65"} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:22.944120  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | skip adding static IP to network mk-old-k8s-version-225140 - found existing host DHCP lease matching {name: "old-k8s-version-225140", mac: "52:54:00:9c:98:61", ip: "192.168.72.65"}
	I1031 00:13:22.944139  248084 main.go:141] libmachine: (old-k8s-version-225140) Reserved static IP address: 192.168.72.65
	I1031 00:13:22.944160  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Getting to WaitForSSH function...
	I1031 00:13:22.944168  248084 main.go:141] libmachine: (old-k8s-version-225140) Waiting for SSH to be available...
	I1031 00:13:22.946799  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:22.947189  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:22.947222  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:22.947416  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Using SSH client type: external
	I1031 00:13:22.947448  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa (-rw-------)
	I1031 00:13:22.947508  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:13:22.947534  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | About to run SSH command:
	I1031 00:13:22.947581  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | exit 0
	I1031 00:13:23.045850  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | SSH cmd err, output: <nil>: 
	I1031 00:13:23.046239  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetConfigRaw
	I1031 00:13:23.046996  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetIP
	I1031 00:13:23.050061  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.050464  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.050496  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.050789  248084 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/config.json ...
	I1031 00:13:23.051046  248084 machine.go:88] provisioning docker machine ...
	I1031 00:13:23.051070  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:23.051289  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetMachineName
	I1031 00:13:23.051484  248084 buildroot.go:166] provisioning hostname "old-k8s-version-225140"
	I1031 00:13:23.051511  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetMachineName
	I1031 00:13:23.051731  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.054157  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.054603  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.054636  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.054784  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:23.055085  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.055291  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.055503  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:23.055718  248084 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:23.056178  248084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1031 00:13:23.056203  248084 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-225140 && echo "old-k8s-version-225140" | sudo tee /etc/hostname
	I1031 00:13:23.184296  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-225140
	
	I1031 00:13:23.184356  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.187270  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.187720  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.187761  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.187895  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:23.188085  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.188228  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.188340  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:23.188565  248084 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:23.189104  248084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1031 00:13:23.189135  248084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-225140' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-225140/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-225140' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:13:23.315792  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
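Editor's note: the provisioning step above pipes a short shell script over SSH that rewrites the 127.0.1.1 entry in the guest's /etc/hosts so the machine name resolves locally. A Go sketch of the equivalent local logic, assuming the file path and hostname from the log; minikube performs this remotely over SSH.

// etc_hosts.go - sketch: ensure /etc/hosts maps 127.0.1.1 to the machine hostname,
// mirroring the sed/tee script in the log. Path and name are illustrative.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func ensureHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// looser than the log's grep -x whole-line match, but enough for a sketch
	if strings.Contains(string(data), name) {
		return nil
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + name
	var out string
	if re.Match(data) {
		out = re.ReplaceAllString(string(data), entry)
	} else {
		out = string(data) + entry + "\n"
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "old-k8s-version-225140"); err != nil {
		fmt.Println("error:", err)
	}
}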
	I1031 00:13:23.315829  248084 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:13:23.315893  248084 buildroot.go:174] setting up certificates
	I1031 00:13:23.315906  248084 provision.go:83] configureAuth start
	I1031 00:13:23.315921  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetMachineName
	I1031 00:13:23.316224  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetIP
	I1031 00:13:23.319690  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.320111  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.320143  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.320315  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.322897  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.323334  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.323362  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.323720  248084 provision.go:138] copyHostCerts
	I1031 00:13:23.323803  248084 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:13:23.323820  248084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:13:23.323895  248084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:13:23.324025  248084 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:13:23.324043  248084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:13:23.324080  248084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:13:23.324257  248084 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:13:23.324272  248084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:13:23.324313  248084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:13:23.324415  248084 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-225140 san=[192.168.72.65 192.168.72.65 localhost 127.0.0.1 minikube old-k8s-version-225140]
	I1031 00:13:23.580836  248084 provision.go:172] copyRemoteCerts
	I1031 00:13:23.580905  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:13:23.580929  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.584088  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.584527  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.584576  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.584872  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:23.585115  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.585290  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:23.585440  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:13:23.680241  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1031 00:13:23.706003  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 00:13:23.730993  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:13:23.760873  248084 provision.go:86] duration metric: configureAuth took 444.934236ms
	I1031 00:13:23.760909  248084 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:13:23.761208  248084 config.go:182] Loaded profile config "old-k8s-version-225140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1031 00:13:23.761370  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.764798  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.765219  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.765273  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.765411  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:23.765646  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.765868  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.766036  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:23.766256  248084 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:23.766762  248084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1031 00:13:23.766796  248084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:13:24.109914  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:13:24.109946  248084 machine.go:91] provisioned docker machine in 1.058882555s
	I1031 00:13:24.109958  248084 start.go:300] post-start starting for "old-k8s-version-225140" (driver="kvm2")
	I1031 00:13:24.109972  248084 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:13:24.109994  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.110392  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:13:24.110456  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:24.113825  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.114298  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.114335  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.114587  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:24.114814  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.114989  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:24.115148  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:13:24.206997  248084 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:13:24.211439  248084 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:13:24.211467  248084 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:13:24.211551  248084 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:13:24.211635  248084 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:13:24.211722  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:13:24.219976  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:13:24.246337  248084 start.go:303] post-start completed in 136.360652ms
	I1031 00:13:24.246366  248084 fix.go:56] fixHost completed within 23.427336969s
	I1031 00:13:24.246389  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:24.249547  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.249876  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.249919  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.250099  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:24.250300  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.250603  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.250815  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:24.251022  248084 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:24.251387  248084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1031 00:13:24.251413  248084 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 00:13:24.366477  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698711204.302770779
	
	I1031 00:13:24.366499  248084 fix.go:206] guest clock: 1698711204.302770779
	I1031 00:13:24.366507  248084 fix.go:219] Guest: 2023-10-31 00:13:24.302770779 +0000 UTC Remote: 2023-10-31 00:13:24.246369619 +0000 UTC m=+368.452785688 (delta=56.40116ms)
	I1031 00:13:24.366558  248084 fix.go:190] guest clock delta is within tolerance: 56.40116ms
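Editor's note: fix.go reads the guest clock with `date +%s.%N`, compares it against the host's reading, and only resyncs if the delta exceeds a tolerance; here 56ms is within bounds. A sketch of that comparison using the two readings from the log; the 1s threshold is an assumption, not the value minikube actually uses.

// clock_delta.go - sketch: parse a `date +%s.%N` guest reading and compare it with a
// host reading. Both sample values come from the log; the tolerance is illustrative.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(reading string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(reading), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate fractional part to nanoseconds
		nsec, _ = strconv.ParseInt(frac, 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1698711204.302770779") // guest reading from the log
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	host := time.Unix(1698711204, 246369619) // host-side reading from the log
	delta := guest.Sub(host)
	if delta < -time.Second || delta > time.Second {
		fmt.Println("guest clock drifted, would resync:", delta)
	} else {
		fmt.Println("guest clock delta within tolerance:", delta) // ~56ms, matching the log
	}
}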
	I1031 00:13:24.366570  248084 start.go:83] releasing machines lock for "old-k8s-version-225140", held for 23.547580429s
	I1031 00:13:24.366599  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.366871  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetIP
	I1031 00:13:24.369640  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.369985  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.370032  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.370155  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.370695  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.370910  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.370996  248084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:13:24.371044  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:24.371205  248084 ssh_runner.go:195] Run: cat /version.json
	I1031 00:13:24.371233  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:24.373962  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.374315  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.374349  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.374379  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.374621  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:24.374759  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.374796  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.374822  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.374952  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:24.375018  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:24.375140  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.375139  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:13:24.375278  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:24.375383  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:13:24.490387  248084 ssh_runner.go:195] Run: systemctl --version
	I1031 00:13:24.497758  248084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:13:24.645967  248084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:13:24.652716  248084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:13:24.652795  248084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:13:24.668415  248084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 00:13:24.668446  248084 start.go:472] detecting cgroup driver to use...
	I1031 00:13:24.668513  248084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:13:24.683255  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:13:24.697242  248084 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:13:24.697295  248084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:13:24.710554  248084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:13:24.725562  248084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:13:24.847447  248084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:13:24.982382  248084 docker.go:214] disabling docker service ...
	I1031 00:13:24.982477  248084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:13:24.998270  248084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:13:25.011136  248084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:13:25.129421  248084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:13:25.258387  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 00:13:25.271528  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:13:25.291702  248084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1031 00:13:25.291788  248084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:25.301762  248084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:13:25.301826  248084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:25.311900  248084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:25.322111  248084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:25.331429  248084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 00:13:25.344907  248084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:13:25.354397  248084 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 00:13:25.354463  248084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 00:13:25.367335  248084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 00:13:25.376415  248084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:13:25.493551  248084 ssh_runner.go:195] Run: sudo systemctl restart crio
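Editor's note: the block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroupfs cgroup manager, conmon cgroup) and then restarts CRI-O. The Go sketch below applies the same key rewrites; the file path and values are the ones from the log, everything else is illustrative.

// crio_conf.go - sketch: the in-place edits the log performs with sed on
// /etc/crio/crio.conf.d/02-crio.conf before restarting the service.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	conf := string(data)
	// pin the pause image used for pod sandboxes
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.1"`)
	// drop any stale conmon_cgroup line, then set cgroupfs and re-add conmon_cgroup = "pod"
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
		fmt.Println("write:", err)
		return
	}
	// a live run would now daemon-reload and restart crio, as the log does
}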
	I1031 00:13:25.677504  248084 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:13:25.677648  248084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:13:25.683882  248084 start.go:540] Will wait 60s for crictl version
	I1031 00:13:25.683952  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:25.687748  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:13:25.729230  248084 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:13:25.729316  248084 ssh_runner.go:195] Run: crio --version
	I1031 00:13:25.782619  248084 ssh_runner.go:195] Run: crio --version
	I1031 00:13:25.832400  248084 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1031 00:13:25.833898  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetIP
	I1031 00:13:25.836924  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:25.837347  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:25.837372  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:25.837666  248084 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1031 00:13:25.841940  248084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:13:24.051460  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:26.554325  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:26.499116  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:26.499157  249055 api_server.go:103] status: https://192.168.39.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:26.499172  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:26.509898  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:26.509929  249055 api_server.go:103] status: https://192.168.39.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:27.010543  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:27.024054  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:13:27.024104  249055 api_server.go:103] status: https://192.168.39.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:13:27.510303  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:27.518621  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:13:27.518658  249055 api_server.go:103] status: https://192.168.39.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:13:28.010147  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:28.017834  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 200:
	ok
	I1031 00:13:28.027903  249055 api_server.go:141] control plane version: v1.28.3
	I1031 00:13:28.028005  249055 api_server.go:131] duration metric: took 4.972421145s to wait for apiserver health ...
	I1031 00:13:28.028033  249055 cni.go:84] Creating CNI manager for ""
	I1031 00:13:28.028070  249055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:28.030427  249055 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:13:28.032020  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:13:28.042889  249055 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
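Editor's note: for the kvm2 driver with the crio runtime, minikube recommends the bridge CNI and copies a small conflist (457 bytes here) into /etc/cni/net.d. The sketch below writes a minimal bridge configuration of the same general shape; the field values are illustrative assumptions, not the exact bytes minikube generates.

// bridge_cni.go - sketch: write a minimal bridge CNI conflist to /etc/cni/net.d.
// Plugin fields are illustrative; minikube renders its own template.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0755); err != nil {
		fmt.Println("mkdir:", err)
		return
	}
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0644); err != nil {
		fmt.Println("write:", err)
	}
}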
	I1031 00:13:28.084357  249055 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:13:28.114368  249055 system_pods.go:59] 8 kube-system pods found
	I1031 00:13:28.114416  249055 system_pods.go:61] "coredns-5dd5756b68-6sbs7" [4cf52749-359c-42b7-a985-d2cdc3f20700] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1031 00:13:28.114430  249055 system_pods.go:61] "etcd-default-k8s-diff-port-892233" [75c06d7d-877d-4df8-9805-0ea50aec938f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1031 00:13:28.114440  249055 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-892233" [6eb1d4f8-0594-4992-962c-383062853ed0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1031 00:13:28.114460  249055 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-892233" [8b5e8ab9-34fe-4337-95d1-554adbd23505] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1031 00:13:28.114470  249055 system_pods.go:61] "kube-proxy-jn2j8" [23f4d9d7-61a0-43d9-a815-a4ce10a568e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1031 00:13:28.114479  249055 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-892233" [dcb7e68d-4e3d-4e46-935a-1372309ad89c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1031 00:13:28.114488  249055 system_pods.go:61] "metrics-server-57f55c9bc5-7klqw" [3f832e2c-81b4-431e-b1a2-987057fdae0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:13:28.114502  249055 system_pods.go:61] "storage-provisioner" [b912cf02-280b-47e0-8e72-fd22566a40f9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1031 00:13:28.114515  249055 system_pods.go:74] duration metric: took 30.127265ms to wait for pod list to return data ...
	I1031 00:13:28.114534  249055 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:13:28.126920  249055 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:13:28.126971  249055 node_conditions.go:123] node cpu capacity is 2
	I1031 00:13:28.127018  249055 node_conditions.go:105] duration metric: took 12.476154ms to run NodePressure ...
	I1031 00:13:28.127048  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:28.402286  249055 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1031 00:13:28.407352  249055 kubeadm.go:787] kubelet initialised
	I1031 00:13:28.407384  249055 kubeadm.go:788] duration metric: took 5.069821ms waiting for restarted kubelet to initialise ...
	I1031 00:13:28.407397  249055 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:13:28.413100  249055 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:26.174532  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:28.667350  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:25.856078  248084 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1031 00:13:25.856136  248084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:13:25.913612  248084 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1031 00:13:25.913733  248084 ssh_runner.go:195] Run: which lz4
	I1031 00:13:25.918632  248084 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1031 00:13:25.923981  248084 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 00:13:25.924014  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1031 00:13:27.712494  248084 crio.go:444] Took 1.793896 seconds to copy over tarball
	I1031 00:13:27.712615  248084 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 00:13:29.050835  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:31.549536  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:30.457173  249055 pod_ready.go:102] pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:33.255838  249055 pod_ready.go:102] pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:30.667667  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:33.167250  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:31.207204  248084 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.494544747s)
	I1031 00:13:31.207238  248084 crio.go:451] Took 3.494710 seconds to extract the tarball
	I1031 00:13:31.207250  248084 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 00:13:31.253648  248084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:13:31.312599  248084 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1031 00:13:31.312624  248084 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1031 00:13:31.312719  248084 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.312753  248084 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.312763  248084 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.312776  248084 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1031 00:13:31.312705  248084 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:13:31.313005  248084 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.313122  248084 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.312926  248084 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.314301  248084 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.314408  248084 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:13:31.314826  248084 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.314863  248084 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.314835  248084 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.314877  248084 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.314888  248084 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.314904  248084 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1031 00:13:31.492117  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.493373  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.506179  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.506237  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1031 00:13:31.510547  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.515827  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.524137  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.614442  248084 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1031 00:13:31.614494  248084 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.614544  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.622661  248084 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1031 00:13:31.622718  248084 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.622770  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.630473  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:13:31.674058  248084 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1031 00:13:31.674111  248084 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.674161  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.707251  248084 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1031 00:13:31.707293  248084 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1031 00:13:31.707337  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.718947  248084 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1031 00:13:31.719006  248084 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.719008  248084 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1031 00:13:31.718947  248084 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1031 00:13:31.719056  248084 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.719072  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.719084  248084 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.719111  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.719119  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.719139  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.719176  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.866787  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.866815  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1031 00:13:31.866818  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.866883  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1031 00:13:31.866887  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.866936  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1031 00:13:31.867046  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.993265  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1031 00:13:31.993505  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1031 00:13:31.993999  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1031 00:13:31.994045  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1031 00:13:31.994063  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1031 00:13:31.994123  248084 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1031 00:13:31.999020  248084 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1031 00:13:31.999034  248084 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1031 00:13:31.999068  248084 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1031 00:13:33.460498  248084 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.461402246s)
	I1031 00:13:33.460530  248084 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1031 00:13:33.460582  248084 cache_images.go:92] LoadImages completed in 2.147945804s
	W1031 00:13:33.460661  248084 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I1031 00:13:33.460749  248084 ssh_runner.go:195] Run: crio config
	I1031 00:13:33.528812  248084 cni.go:84] Creating CNI manager for ""
	I1031 00:13:33.528838  248084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:33.528865  248084 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:13:33.528895  248084 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.65 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-225140 NodeName:old-k8s-version-225140 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1031 00:13:33.529103  248084 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-225140"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-225140
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.65:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 00:13:33.529205  248084 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-225140 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-225140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 00:13:33.529276  248084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1031 00:13:33.539328  248084 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:13:33.539424  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:13:33.551543  248084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1031 00:13:33.569095  248084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:13:33.586561  248084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1031 00:13:33.605084  248084 ssh_runner.go:195] Run: grep 192.168.72.65	control-plane.minikube.internal$ /etc/hosts
	I1031 00:13:33.609322  248084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:13:33.623527  248084 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140 for IP: 192.168.72.65
	I1031 00:13:33.623556  248084 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:13:33.623768  248084 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:13:33.623817  248084 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:13:33.623919  248084 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.key
	I1031 00:13:33.624000  248084 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/apiserver.key.fa85241c
	I1031 00:13:33.624074  248084 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/proxy-client.key
	I1031 00:13:33.624223  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:13:33.624267  248084 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:13:33.624285  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:13:33.624333  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:13:33.624377  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:13:33.624409  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:13:33.624480  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:13:33.625311  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:13:33.648457  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:13:33.673383  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:13:33.701679  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 00:13:33.725823  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:13:33.748912  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:13:33.777397  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:13:33.803003  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:13:33.827749  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:13:33.850011  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:13:33.871722  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:13:33.894663  248084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:13:33.912130  248084 ssh_runner.go:195] Run: openssl version
	I1031 00:13:33.918010  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:13:33.928381  248084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:13:33.933548  248084 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:13:33.933605  248084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:13:33.939344  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1031 00:13:33.950844  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:13:33.962585  248084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:13:33.968178  248084 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:13:33.968244  248084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:13:33.975606  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:13:33.986565  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:13:33.998188  248084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:34.003940  248084 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:34.004012  248084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:34.010088  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 00:13:34.022223  248084 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:13:34.028537  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:13:34.036319  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:13:34.043481  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:13:34.051269  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:13:34.058129  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:13:34.065473  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1031 00:13:34.072663  248084 kubeadm.go:404] StartCluster: {Name:old-k8s-version-225140 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-225140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.65 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:13:34.072781  248084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:13:34.072830  248084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:13:34.121758  248084 cri.go:89] found id: ""
	I1031 00:13:34.121848  248084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 00:13:34.135357  248084 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 00:13:34.135392  248084 kubeadm.go:636] restartCluster start
	I1031 00:13:34.135469  248084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 00:13:34.145173  248084 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:34.146905  248084 kubeconfig.go:92] found "old-k8s-version-225140" server: "https://192.168.72.65:8443"
	I1031 00:13:34.150660  248084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 00:13:34.163037  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:34.163119  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:34.184414  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:34.184441  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:34.184586  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:34.197787  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:34.698120  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:34.698246  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:34.710874  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:35.198312  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:35.198384  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:35.210933  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:35.698108  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:35.698210  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:35.710184  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:33.551354  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:36.048781  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:35.442171  249055 pod_ready.go:102] pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:36.941322  249055 pod_ready.go:92] pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:36.941344  249055 pod_ready.go:81] duration metric: took 8.528221711s waiting for pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:36.941353  249055 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:38.959679  249055 pod_ready.go:102] pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:35.168250  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:37.666699  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:36.198699  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:36.198787  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:36.211005  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:36.698612  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:36.698705  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:36.712106  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:37.198674  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:37.198779  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:37.211665  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:37.698160  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:37.698258  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:37.709798  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:38.198294  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:38.198410  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:38.210400  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:38.697965  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:38.698058  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:38.710188  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:39.198306  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:39.198435  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:39.210213  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:39.698867  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:39.698944  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:39.709958  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:40.198113  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:40.198217  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:40.209265  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:40.698424  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:40.698494  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:40.715194  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:38.548167  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:41.047378  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:39.959598  249055 pod_ready.go:92] pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:39.959625  249055 pod_ready.go:81] duration metric: took 3.018261782s waiting for pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.959638  249055 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.965182  249055 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:39.965204  249055 pod_ready.go:81] duration metric: took 5.558563ms waiting for pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.965218  249055 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.970258  249055 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:39.970283  249055 pod_ready.go:81] duration metric: took 5.058027ms waiting for pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.970293  249055 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jn2j8" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.975183  249055 pod_ready.go:92] pod "kube-proxy-jn2j8" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:39.975202  249055 pod_ready.go:81] duration metric: took 4.903272ms waiting for pod "kube-proxy-jn2j8" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.975209  249055 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:40.137875  249055 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:40.137907  249055 pod_ready.go:81] duration metric: took 162.69035ms waiting for pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:40.137921  249055 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:42.452793  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:40.167385  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:42.666396  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:41.198534  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:41.198640  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:41.210412  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:41.698420  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:41.698526  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:41.710324  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:42.198572  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:42.198649  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:42.210399  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:42.697932  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:42.698010  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:42.711010  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:43.198096  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:43.198182  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:43.209468  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:43.698864  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:43.698998  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:43.710735  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:44.163493  248084 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:13:44.163545  248084 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:13:44.163560  248084 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:13:44.163621  248084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:13:44.204352  248084 cri.go:89] found id: ""
	I1031 00:13:44.204444  248084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:13:44.219641  248084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:13:44.228342  248084 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:13:44.228420  248084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:13:44.237058  248084 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:13:44.237081  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:44.369926  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:45.077715  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:45.306025  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:45.399572  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:45.537955  248084 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:13:45.538046  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:45.554284  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:43.549424  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:46.052253  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:44.947118  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:46.954020  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:45.167622  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:47.669895  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:46.073056  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:46.572408  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:47.072392  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:47.098617  248084 api_server.go:72] duration metric: took 1.560662194s to wait for apiserver process to appear ...
	I1031 00:13:47.098650  248084 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:13:47.098673  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:48.547476  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:50.547537  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:49.446620  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:51.946346  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:53.949089  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:52.098997  248084 api_server.go:269] stopped: https://192.168.72.65:8443/healthz: Get "https://192.168.72.65:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1031 00:13:52.099073  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:52.709441  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:52.709490  248084 api_server.go:103] status: https://192.168.72.65:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:53.210178  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:53.216374  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1031 00:13:53.216403  248084 api_server.go:103] status: https://192.168.72.65:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1031 00:13:53.709935  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:53.717326  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1031 00:13:53.717361  248084 api_server.go:103] status: https://192.168.72.65:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1031 00:13:54.209883  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:54.215985  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 200:
	ok
	I1031 00:13:54.224088  248084 api_server.go:141] control plane version: v1.16.0
	I1031 00:13:54.224115  248084 api_server.go:131] duration metric: took 7.125456227s to wait for apiserver health ...
	I1031 00:13:54.224127  248084 cni.go:84] Creating CNI manager for ""
	I1031 00:13:54.224135  248084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:54.226152  248084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:13:50.168563  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:52.669900  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:54.227723  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:13:54.239709  248084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:13:54.261391  248084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:13:54.273728  248084 system_pods.go:59] 7 kube-system pods found
	I1031 00:13:54.273761  248084 system_pods.go:61] "coredns-5644d7b6d9-2s6pc" [c77d23a4-28d0-4bbf-bb28-baff23fc4987] Running
	I1031 00:13:54.273775  248084 system_pods.go:61] "etcd-old-k8s-version-225140" [dcc629ce-f107-4d14-b69b-20228b00b7c5] Running
	I1031 00:13:54.273783  248084 system_pods.go:61] "kube-apiserver-old-k8s-version-225140" [38fd683e-51fa-40f0-a3c6-afdf57e14132] Running
	I1031 00:13:54.273791  248084 system_pods.go:61] "kube-controller-manager-old-k8s-version-225140" [29b1b9cb-1819-497e-b0f9-c008b0ac6e26] Running
	I1031 00:13:54.273803  248084 system_pods.go:61] "kube-proxy-fxz8t" [57ccd26e-cbcf-4ed3-adbe-778fd8bcf27c] Running
	I1031 00:13:54.273811  248084 system_pods.go:61] "kube-scheduler-old-k8s-version-225140" [d8d4d75c-25f8-4485-853c-8fa75105c6e2] Running
	I1031 00:13:54.273818  248084 system_pods.go:61] "storage-provisioner" [8fc76055-6a96-4884-8f91-b2d3f598bc88] Running
	I1031 00:13:54.273826  248084 system_pods.go:74] duration metric: took 12.417629ms to wait for pod list to return data ...
	I1031 00:13:54.273840  248084 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:13:54.279056  248084 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:13:54.279082  248084 node_conditions.go:123] node cpu capacity is 2
	I1031 00:13:54.279094  248084 node_conditions.go:105] duration metric: took 5.248504ms to run NodePressure ...
	I1031 00:13:54.279111  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:54.594257  248084 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1031 00:13:54.600279  248084 retry.go:31] will retry after 287.663167ms: kubelet not initialised
	I1031 00:13:54.899142  248084 retry.go:31] will retry after 297.826066ms: kubelet not initialised
	I1031 00:13:55.205347  248084 retry.go:31] will retry after 797.709551ms: kubelet not initialised
	I1031 00:13:52.548142  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:54.548667  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:57.047942  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:56.446395  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:58.946167  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:55.167909  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:57.668179  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:59.668339  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:56.009099  248084 retry.go:31] will retry after 571.448668ms: kubelet not initialised
	I1031 00:13:56.593388  248084 retry.go:31] will retry after 1.82270665s: kubelet not initialised
	I1031 00:13:58.421789  248084 retry.go:31] will retry after 1.094040234s: kubelet not initialised
	I1031 00:13:59.522021  248084 retry.go:31] will retry after 3.716569913s: kubelet not initialised
	I1031 00:13:59.549278  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:01.551103  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:01.446913  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:03.947203  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:01.668422  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:03.668478  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:03.244381  248084 retry.go:31] will retry after 4.104024564s: kubelet not initialised
	I1031 00:14:04.048498  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:06.548070  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:06.447864  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:08.945886  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:06.166653  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:08.167008  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:07.354371  248084 retry.go:31] will retry after 9.18347873s: kubelet not initialised
	I1031 00:14:09.047421  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:11.048479  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:11.448689  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:13.948268  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:10.667348  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:12.667812  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:13.052934  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:15.547846  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:16.446625  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:18.447872  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:15.167259  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:17.666670  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:19.667251  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:16.544997  248084 retry.go:31] will retry after 8.29261189s: kubelet not initialised
	I1031 00:14:17.550692  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:20.045758  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:22.047516  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:20.946805  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:23.446875  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:21.667436  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:24.167210  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:24.843011  248084 retry.go:31] will retry after 15.309414425s: kubelet not initialised
	I1031 00:14:24.048197  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:26.546847  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:25.946796  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:27.950212  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:26.167443  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:28.168482  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:28.548116  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:31.047187  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:30.446164  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:32.451487  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:30.666762  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:32.667234  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:33.049216  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:35.545964  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:34.946961  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:36.947212  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:38.949437  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:35.167751  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:37.668981  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:39.669233  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:40.157618  248084 kubeadm.go:787] kubelet initialised
	I1031 00:14:40.157647  248084 kubeadm.go:788] duration metric: took 45.563360213s waiting for restarted kubelet to initialise ...
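The "will retry after …: kubelet not initialised" lines above are produced by a retry helper that sleeps a growing, jittered interval between probes until the restarted kubelet responds (about 45s in total here). A minimal stand-alone sketch of that retry-with-backoff shape, using only the Go standard library and a placeholder check function rather than minikube's real probe:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling check until it succeeds or the deadline
// passes, sleeping a growing, slightly jittered interval in between -- the
// same shape as the "will retry after Xms: kubelet not initialised" lines.
func retryWithBackoff(deadline time.Duration, check func() error) error {
	start := time.Now()
	wait := 300 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out after %s: %w", deadline, err)
		}
		jittered := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		wait *= 2
	}
}

func main() {
	attempts := 0
	_ = retryWithBackoff(10*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("kubelet not initialised")
		}
		return nil
	})
}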
	I1031 00:14:40.157660  248084 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:14:40.163372  248084 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-2s6pc" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.169776  248084 pod_ready.go:92] pod "coredns-5644d7b6d9-2s6pc" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.169798  248084 pod_ready.go:81] duration metric: took 6.398827ms waiting for pod "coredns-5644d7b6d9-2s6pc" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.169806  248084 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-b6lnc" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.175023  248084 pod_ready.go:92] pod "coredns-5644d7b6d9-b6lnc" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.175047  248084 pod_ready.go:81] duration metric: took 5.233827ms waiting for pod "coredns-5644d7b6d9-b6lnc" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.175058  248084 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.179248  248084 pod_ready.go:92] pod "etcd-old-k8s-version-225140" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.179269  248084 pod_ready.go:81] duration metric: took 4.202967ms waiting for pod "etcd-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.179279  248084 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.183579  248084 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-225140" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.183593  248084 pod_ready.go:81] duration metric: took 4.308627ms waiting for pod "kube-apiserver-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.183604  248084 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.558275  248084 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-225140" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.558308  248084 pod_ready.go:81] duration metric: took 374.694908ms waiting for pod "kube-controller-manager-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.558321  248084 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fxz8t" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:37.547289  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:40.047586  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:41.446752  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:43.447874  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:42.166207  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:44.167277  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:40.958069  248084 pod_ready.go:92] pod "kube-proxy-fxz8t" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.958099  248084 pod_ready.go:81] duration metric: took 399.768399ms waiting for pod "kube-proxy-fxz8t" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.958112  248084 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:41.358244  248084 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-225140" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:41.358274  248084 pod_ready.go:81] duration metric: took 400.15381ms waiting for pod "kube-scheduler-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:41.358284  248084 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace to be "Ready" ...
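From here on, the pod_ready lines simply poll each pod's Ready condition until it turns True or the wait budget runs out; metrics-server never gets there, which is what produces the long run of "Ready":"False" entries that follow. A hedged sketch of that readiness check with client-go (the kubeconfig path and pod name are copied from this log, but the code is illustrative, not minikube's actual pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True -- the check
// behind the "has status Ready:False" lines in this log.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-74d5856cc6-l6gmw", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}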
	I1031 00:14:43.666594  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:45.666948  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:42.547950  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:45.047306  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:45.946510  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:47.946663  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:46.167952  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:48.667854  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:48.166448  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:50.167022  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:47.547211  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:49.548100  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:51.548509  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:50.446801  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:52.447233  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:51.168676  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:53.667170  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:52.666608  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:54.667583  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:53.550528  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:56.050177  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:54.947677  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:57.447082  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:55.669616  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:58.170640  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:57.165612  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:59.168165  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:58.548441  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:01.047296  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:59.447626  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:01.947292  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:00.669772  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:03.167493  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:01.665706  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:04.166609  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:03.546708  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:05.547092  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:04.447672  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:06.449541  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:08.948333  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:05.667422  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:07.669173  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:06.666325  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:09.165998  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:07.547133  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:09.547568  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:11.551676  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:11.446875  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:13.946673  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:10.168209  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:12.666973  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:14.668147  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:11.166824  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:13.665410  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:14.046068  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:16.047803  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:15.946975  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:18.445704  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:17.167480  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:19.668157  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:16.165876  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:18.166620  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:20.666455  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:18.549666  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:21.046823  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:20.447212  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:22.947109  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:22.167144  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:24.168041  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:22.667076  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:25.167164  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:23.047419  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:25.049728  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:24.947312  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:27.449246  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:26.669861  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:29.168519  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:27.666465  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:30.166123  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:27.547889  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:30.046604  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:32.048045  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:29.948497  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:32.446948  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:31.670479  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:34.167604  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:32.668009  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:35.165749  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:34.547533  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:37.048031  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:34.945337  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:36.947811  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:36.168180  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:38.170343  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:37.168053  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:39.665709  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:39.552108  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:42.047262  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:39.451699  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:41.946296  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:40.667428  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:42.668235  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:41.666624  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:44.166672  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:44.047729  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:46.549442  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:44.447109  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:46.448250  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:48.947017  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:45.167138  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:47.666886  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:49.667907  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:46.669428  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:49.166194  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:49.047526  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:51.049047  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:50.947410  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:53.446734  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:52.167771  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:54.167875  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:51.666228  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:53.667295  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:53.052036  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:55.547767  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:55.946776  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:58.446825  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:56.668562  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:59.168110  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:56.167716  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:58.665487  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:00.668666  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:58.047770  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:00.047908  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:02.048356  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:00.946590  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:02.947001  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:01.667160  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:04.167375  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:03.165171  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:05.166289  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:04.049788  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:06.547020  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:05.446511  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:07.449772  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:06.667622  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:08.667665  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:07.166410  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:09.166536  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:09.049966  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:11.547967  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:09.947975  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:12.447789  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:11.168645  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:13.667838  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:11.665962  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:13.667117  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:15.667752  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:14.047716  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:16.048052  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:14.947264  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:16.947386  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:16.167045  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:18.668483  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:17.669275  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:20.167079  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:18.548369  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:20.548635  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:19.448662  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:21.947615  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:21.167164  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:23.167506  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:22.666820  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:25.166614  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:23.046392  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:25.548954  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:24.446814  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:26.945792  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:28.947133  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:25.167732  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:27.168662  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:29.171362  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:27.169221  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:29.667206  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:27.550807  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:30.048391  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:31.448249  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:33.946336  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:31.667185  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:33.667628  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:32.165207  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:34.166237  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:32.546558  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:35.046558  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:37.047654  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:35.946896  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:38.449959  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:35.668366  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:38.168509  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:36.166529  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:38.666448  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:39.552154  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:42.046335  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:40.946962  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:43.446383  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:40.666758  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:42.668031  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:41.168643  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:43.170216  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:45.666959  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:44.046908  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:46.548312  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:45.947573  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:47.947914  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:45.166562  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:47.667578  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:47.667903  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:50.166574  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:49.046763  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:51.047566  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:49.948510  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:52.446760  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:50.168646  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:52.667122  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:54.668132  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:52.168815  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:54.667713  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:53.546751  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:56.048217  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:54.947315  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:57.447727  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:57.169330  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:59.666819  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:57.166002  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:59.168109  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:58.548212  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:01.047033  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:59.448330  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:01.946970  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:01.667755  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:04.167493  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:01.666457  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:04.167186  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:03.546842  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:05.547488  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:04.445743  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:06.446624  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:08.451015  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:05.644115  248387 pod_ready.go:81] duration metric: took 4m0.000125657s waiting for pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace to be "Ready" ...
	E1031 00:17:05.644148  248387 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:17:05.644168  248387 pod_ready.go:38] duration metric: took 4m9.241022532s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:17:05.644198  248387 kubeadm.go:640] restartCluster took 4m28.058055798s
	W1031 00:17:05.644570  248387 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1031 00:17:05.644685  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
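The 4m0s budget above is enforced with a context deadline, which is why the failure surfaces as "WaitExtra: waitPodCondition: context deadline exceeded" before the cluster is reset. A small sketch of that wait-until-deadline pattern (standard library only; the check function is a placeholder that never succeeds, so the example deliberately times out):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitWithDeadline polls check every interval until it returns true or the
// overall timeout expires, in which case it surfaces context.DeadlineExceeded.
func waitWithDeadline(timeout, interval time.Duration, check func() bool) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if check() {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	err := waitWithDeadline(2*time.Second, 200*time.Millisecond, func() bool { return false })
	fmt.Println(errors.Is(err, context.DeadlineExceeded)) // true
}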
	I1031 00:17:06.168910  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:08.666612  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:08.047998  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:10.547186  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:10.946940  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:13.455539  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:11.168678  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:13.667122  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:13.046682  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:13.240656  248718 pod_ready.go:81] duration metric: took 4m0.001083426s waiting for pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace to be "Ready" ...
	E1031 00:17:13.240702  248718 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:17:13.240712  248718 pod_ready.go:38] duration metric: took 4m0.801552437s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:17:13.240732  248718 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:17:13.240766  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:17:13.240930  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:17:13.307072  248718 cri.go:89] found id: "bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:13.307099  248718 cri.go:89] found id: ""
	I1031 00:17:13.307108  248718 logs.go:284] 1 containers: [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033]
	I1031 00:17:13.307180  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.312997  248718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:17:13.313067  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:17:13.364439  248718 cri.go:89] found id: "35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:13.364474  248718 cri.go:89] found id: ""
	I1031 00:17:13.364485  248718 logs.go:284] 1 containers: [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6]
	I1031 00:17:13.364561  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.370120  248718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:17:13.370186  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:17:13.413937  248718 cri.go:89] found id: "8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:13.413972  248718 cri.go:89] found id: ""
	I1031 00:17:13.413983  248718 logs.go:284] 1 containers: [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26]
	I1031 00:17:13.414051  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.420586  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:17:13.420669  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:17:13.476980  248718 cri.go:89] found id: "ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:13.477008  248718 cri.go:89] found id: ""
	I1031 00:17:13.477028  248718 logs.go:284] 1 containers: [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80]
	I1031 00:17:13.477100  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.482874  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:17:13.482957  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:17:13.532196  248718 cri.go:89] found id: "f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:13.532232  248718 cri.go:89] found id: ""
	I1031 00:17:13.532244  248718 logs.go:284] 1 containers: [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3]
	I1031 00:17:13.532314  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.539868  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:17:13.540017  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:17:13.595189  248718 cri.go:89] found id: "4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:13.595218  248718 cri.go:89] found id: ""
	I1031 00:17:13.595231  248718 logs.go:284] 1 containers: [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70]
	I1031 00:17:13.595305  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.601429  248718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:17:13.601496  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:17:13.641957  248718 cri.go:89] found id: ""
	I1031 00:17:13.641984  248718 logs.go:284] 0 containers: []
	W1031 00:17:13.641992  248718 logs.go:286] No container was found matching "kindnet"
	I1031 00:17:13.641998  248718 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:17:13.642053  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:17:13.683163  248718 cri.go:89] found id: "86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:13.683193  248718 cri.go:89] found id: "622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:13.683200  248718 cri.go:89] found id: ""
	I1031 00:17:13.683209  248718 logs.go:284] 2 containers: [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c]
	I1031 00:17:13.683266  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.689222  248718 ssh_runner.go:195] Run: which crictl
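The cri.go/logs.go exchange above resolves crictl and asks it for the IDs of all containers, running or exited, whose name matches each control-plane component. A minimal sketch of that listing step (assumes crictl is on PATH and sudo is available, as in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the "sudo crictl ps -a --quiet --name=..." lines:
// it returns the IDs of all containers whose name matches the filter.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listContainers("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}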
	I1031 00:17:13.693814  248718 logs.go:123] Gathering logs for dmesg ...
	I1031 00:17:13.693839  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:17:13.710167  248718 logs.go:123] Gathering logs for kube-proxy [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3] ...
	I1031 00:17:13.710188  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:13.754241  248718 logs.go:123] Gathering logs for storage-provisioner [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3] ...
	I1031 00:17:13.754273  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:13.800473  248718 logs.go:123] Gathering logs for kube-apiserver [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033] ...
	I1031 00:17:13.800508  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:13.857072  248718 logs.go:123] Gathering logs for coredns [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26] ...
	I1031 00:17:13.857101  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:13.901072  248718 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:17:13.901102  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:17:14.390850  248718 logs.go:123] Gathering logs for container status ...
	I1031 00:17:14.390894  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:17:14.446107  248718 logs.go:123] Gathering logs for etcd [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6] ...
	I1031 00:17:14.446141  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:14.495337  248718 logs.go:123] Gathering logs for kube-scheduler [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80] ...
	I1031 00:17:14.495368  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:14.535558  248718 logs.go:123] Gathering logs for kube-controller-manager [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70] ...
	I1031 00:17:14.535591  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:14.589637  248718 logs.go:123] Gathering logs for kubelet ...
	I1031 00:17:14.589676  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1031 00:17:14.650509  248718 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:17:14.650559  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:17:14.816331  248718 logs.go:123] Gathering logs for storage-provisioner [622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c] ...
	I1031 00:17:14.816362  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
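The lines above are one pass of the log-gathering cycle minikube repeats while waiting on the apiserver: it lists container IDs per component with `crictl ps -a --quiet --name=<component>`, then tails each hit with `crictl logs --tail 400`. A minimal local sketch of that flow, assuming crictl is on the node and running it directly rather than through minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs mirrors the cri.go/logs.go pattern seen in the log:
// find all container IDs for a component, then tail each one's logs.
func gatherComponentLogs(component string) error {
	// `crictl ps -a --quiet --name=<component>` prints one container ID per line.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return fmt.Errorf("listing %s containers: %w", component, err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Printf("no container was found matching %q\n", component)
		return nil
	}
	for _, id := range ids {
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return fmt.Errorf("logs for %s [%s]: %w", component, id, err)
		}
		fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"} {
		if err := gatherComponentLogs(c); err != nil {
			fmt.Println(err)
		}
	}
}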
	I1031 00:17:17.363336  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:17:17.378105  248718 api_server.go:72] duration metric: took 4m12.292425365s to wait for apiserver process to appear ...
	I1031 00:17:17.378131  248718 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:17:17.378171  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:17:17.378234  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:17:17.424054  248718 cri.go:89] found id: "bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:17.424082  248718 cri.go:89] found id: ""
	I1031 00:17:17.424091  248718 logs.go:284] 1 containers: [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033]
	I1031 00:17:17.424152  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.428185  248718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:17:17.428246  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:17:17.465132  248718 cri.go:89] found id: "35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:17.465157  248718 cri.go:89] found id: ""
	I1031 00:17:17.465167  248718 logs.go:284] 1 containers: [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6]
	I1031 00:17:17.465219  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.469315  248718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:17:17.469392  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:17:17.504119  248718 cri.go:89] found id: "8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:17.504140  248718 cri.go:89] found id: ""
	I1031 00:17:17.504151  248718 logs.go:284] 1 containers: [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26]
	I1031 00:17:17.504199  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:15.946464  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:17.949398  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:19.822838  248387 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.178119551s)
	I1031 00:17:19.822927  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:17:19.838182  248387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:17:19.847738  248387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:17:19.857883  248387 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
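The config check above exits with status 2 because none of the kubeconfig files under /etc/kubernetes exist yet, so the stale-config cleanup is skipped and a fresh `kubeadm init` follows on the next line. A rough sketch of that decision, assuming the same four paths (this only illustrates the shape of the check, not the actual kubeadm.go code):

package main

import (
	"fmt"
	"os"
)

func main() {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	missing := 0
	for _, p := range confs {
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("cannot access %s: %v\n", p, err)
			missing++
		}
	}
	if missing > 0 {
		// Nothing to clean up: go straight to a fresh `kubeadm init`.
		fmt.Println("config check failed, skipping stale config cleanup")
		return
	}
	fmt.Println("existing configs found, cleaning up before re-init")
}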
	I1031 00:17:19.857939  248387 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1031 00:17:19.911372  248387 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1031 00:17:19.911432  248387 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 00:17:20.091412  248387 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 00:17:20.091582  248387 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 00:17:20.091703  248387 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 00:17:20.351519  248387 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 00:17:16.166533  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:18.668258  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:20.353310  248387 out.go:204]   - Generating certificates and keys ...
	I1031 00:17:20.353500  248387 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 00:17:20.353598  248387 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 00:17:20.353712  248387 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1031 00:17:20.353809  248387 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1031 00:17:20.353933  248387 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1031 00:17:20.354050  248387 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1031 00:17:20.354132  248387 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1031 00:17:20.354241  248387 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1031 00:17:20.354353  248387 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1031 00:17:20.354596  248387 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1031 00:17:20.355193  248387 kubeadm.go:322] [certs] Using the existing "sa" key
	I1031 00:17:20.355332  248387 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 00:17:21.009329  248387 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 00:17:21.145431  248387 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 00:17:21.231013  248387 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 00:17:21.384423  248387 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 00:17:21.385066  248387 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 00:17:21.387895  248387 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 00:17:17.508240  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:17:17.510213  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:17:17.548666  248718 cri.go:89] found id: "ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:17.548692  248718 cri.go:89] found id: ""
	I1031 00:17:17.548702  248718 logs.go:284] 1 containers: [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80]
	I1031 00:17:17.548768  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.552963  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:17:17.553029  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:17:17.593690  248718 cri.go:89] found id: "f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:17.593728  248718 cri.go:89] found id: ""
	I1031 00:17:17.593739  248718 logs.go:284] 1 containers: [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3]
	I1031 00:17:17.593808  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.598269  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:17:17.598325  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:17:17.637723  248718 cri.go:89] found id: "4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:17.637750  248718 cri.go:89] found id: ""
	I1031 00:17:17.637761  248718 logs.go:284] 1 containers: [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70]
	I1031 00:17:17.637826  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.642006  248718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:17:17.642055  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:17:17.686659  248718 cri.go:89] found id: ""
	I1031 00:17:17.686687  248718 logs.go:284] 0 containers: []
	W1031 00:17:17.686695  248718 logs.go:286] No container was found matching "kindnet"
	I1031 00:17:17.686701  248718 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:17:17.686766  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:17:17.732114  248718 cri.go:89] found id: "86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:17.732147  248718 cri.go:89] found id: "622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:17.732154  248718 cri.go:89] found id: ""
	I1031 00:17:17.732163  248718 logs.go:284] 2 containers: [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c]
	I1031 00:17:17.732232  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.737308  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.741981  248718 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:17:17.742013  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:17:18.181024  248718 logs.go:123] Gathering logs for dmesg ...
	I1031 00:17:18.181062  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:17:18.196483  248718 logs.go:123] Gathering logs for coredns [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26] ...
	I1031 00:17:18.196519  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:18.235422  248718 logs.go:123] Gathering logs for kube-controller-manager [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70] ...
	I1031 00:17:18.235458  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:18.291366  248718 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:17:18.291402  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:17:18.412906  248718 logs.go:123] Gathering logs for etcd [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6] ...
	I1031 00:17:18.412960  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:18.469631  248718 logs.go:123] Gathering logs for kubelet ...
	I1031 00:17:18.469669  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1031 00:17:18.523997  248718 logs.go:123] Gathering logs for kube-scheduler [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80] ...
	I1031 00:17:18.524034  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:18.566490  248718 logs.go:123] Gathering logs for container status ...
	I1031 00:17:18.566520  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:17:18.626106  248718 logs.go:123] Gathering logs for storage-provisioner [622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c] ...
	I1031 00:17:18.626138  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:18.666341  248718 logs.go:123] Gathering logs for kube-apiserver [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033] ...
	I1031 00:17:18.666382  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:18.729380  248718 logs.go:123] Gathering logs for kube-proxy [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3] ...
	I1031 00:17:18.729430  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:18.788148  248718 logs.go:123] Gathering logs for storage-provisioner [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3] ...
	I1031 00:17:18.788182  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:21.330782  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:17:21.338085  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 200:
	ok
	I1031 00:17:21.339623  248718 api_server.go:141] control plane version: v1.28.3
	I1031 00:17:21.339671  248718 api_server.go:131] duration metric: took 3.961531332s to wait for apiserver health ...
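The healthz probe above is a plain HTTPS GET against the apiserver, retried until it returns 200 "ok". A minimal sketch of such a probe; the address is the one from the log, while the timeout, retry interval, and the InsecureSkipVerify shortcut (the real client trusts the cluster CA) are assumptions:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Shortcut for a quick probe; production code should verify the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.2:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return
			}
			fmt.Printf("%s returned %d\n", url, resp.StatusCode)
		}
		time.Sleep(time.Second)
	}
	fmt.Println("apiserver never became healthy")
}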
	I1031 00:17:21.339684  248718 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:17:21.339718  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:17:21.339786  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:17:21.380659  248718 cri.go:89] found id: "bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:21.380687  248718 cri.go:89] found id: ""
	I1031 00:17:21.380696  248718 logs.go:284] 1 containers: [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033]
	I1031 00:17:21.380760  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.385559  248718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:17:21.385626  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:17:21.431810  248718 cri.go:89] found id: "35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:21.431841  248718 cri.go:89] found id: ""
	I1031 00:17:21.431851  248718 logs.go:284] 1 containers: [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6]
	I1031 00:17:21.431914  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.436489  248718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:17:21.436562  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:17:21.489003  248718 cri.go:89] found id: "8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:21.489036  248718 cri.go:89] found id: ""
	I1031 00:17:21.489047  248718 logs.go:284] 1 containers: [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26]
	I1031 00:17:21.489109  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.493691  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:17:21.493765  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:17:21.533480  248718 cri.go:89] found id: "ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:21.533507  248718 cri.go:89] found id: ""
	I1031 00:17:21.533518  248718 logs.go:284] 1 containers: [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80]
	I1031 00:17:21.533584  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.538269  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:17:21.538358  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:17:21.589588  248718 cri.go:89] found id: "f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:21.589621  248718 cri.go:89] found id: ""
	I1031 00:17:21.589632  248718 logs.go:284] 1 containers: [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3]
	I1031 00:17:21.589705  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.595927  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:17:21.596020  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:17:21.644705  248718 cri.go:89] found id: "4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:21.644730  248718 cri.go:89] found id: ""
	I1031 00:17:21.644738  248718 logs.go:284] 1 containers: [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70]
	I1031 00:17:21.644797  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.649696  248718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:17:21.649762  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:17:21.696655  248718 cri.go:89] found id: ""
	I1031 00:17:21.696692  248718 logs.go:284] 0 containers: []
	W1031 00:17:21.696703  248718 logs.go:286] No container was found matching "kindnet"
	I1031 00:17:21.696711  248718 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:17:21.696788  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:17:21.743499  248718 cri.go:89] found id: "86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:21.743523  248718 cri.go:89] found id: "622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:21.743528  248718 cri.go:89] found id: ""
	I1031 00:17:21.743535  248718 logs.go:284] 2 containers: [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c]
	I1031 00:17:21.743586  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.748625  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.753187  248718 logs.go:123] Gathering logs for dmesg ...
	I1031 00:17:21.753223  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:17:21.768074  248718 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:17:21.768115  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:17:21.913742  248718 logs.go:123] Gathering logs for coredns [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26] ...
	I1031 00:17:21.913782  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:21.966345  248718 logs.go:123] Gathering logs for storage-provisioner [622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c] ...
	I1031 00:17:21.966394  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:22.004823  248718 logs.go:123] Gathering logs for container status ...
	I1031 00:17:22.004857  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:17:22.059117  248718 logs.go:123] Gathering logs for etcd [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6] ...
	I1031 00:17:22.059147  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:22.117615  248718 logs.go:123] Gathering logs for kube-scheduler [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80] ...
	I1031 00:17:22.117655  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:22.160231  248718 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:17:22.160275  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:17:20.445730  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:22.447412  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:21.390006  248387 out.go:204]   - Booting up control plane ...
	I1031 00:17:21.390170  248387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 00:17:21.390275  248387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 00:17:21.391130  248387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 00:17:21.408062  248387 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 00:17:21.409190  248387 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 00:17:21.409256  248387 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1031 00:17:21.565150  248387 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 00:17:22.536881  248718 logs.go:123] Gathering logs for kube-apiserver [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033] ...
	I1031 00:17:22.536920  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:22.591993  248718 logs.go:123] Gathering logs for kube-proxy [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3] ...
	I1031 00:17:22.592030  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:22.644262  248718 logs.go:123] Gathering logs for storage-provisioner [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3] ...
	I1031 00:17:22.644302  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:22.688848  248718 logs.go:123] Gathering logs for kubelet ...
	I1031 00:17:22.688880  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1031 00:17:22.740390  248718 logs.go:123] Gathering logs for kube-controller-manager [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70] ...
	I1031 00:17:22.740440  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:25.317640  248718 system_pods.go:59] 8 kube-system pods found
	I1031 00:17:25.317675  248718 system_pods.go:61] "coredns-5dd5756b68-dqrs4" [f6d80a09-c397-4c78-a038-f07cad11de9c] Running
	I1031 00:17:25.317682  248718 system_pods.go:61] "etcd-embed-certs-078843" [2dd3d20f-1309-4ec9-ab75-6b00cadc5827] Running
	I1031 00:17:25.317690  248718 system_pods.go:61] "kube-apiserver-embed-certs-078843" [6a41123e-11a9-4aff-8f78-802b8f59a1bb] Running
	I1031 00:17:25.317696  248718 system_pods.go:61] "kube-controller-manager-embed-certs-078843" [9ccb551e-3e3f-4cdc-991e-65b41febf105] Running
	I1031 00:17:25.317702  248718 system_pods.go:61] "kube-proxy-287dq" [c9c3a3a9-ff79-4cd8-ab26-a4ca2bec1fd9] Running
	I1031 00:17:25.317709  248718 system_pods.go:61] "kube-scheduler-embed-certs-078843" [13a0f095-b945-437c-a7ef-929739bfcb01] Running
	I1031 00:17:25.317718  248718 system_pods.go:61] "metrics-server-57f55c9bc5-pm6qx" [5ed61015-eb88-4381-adc3-8d1f4021c6aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:17:25.317728  248718 system_pods.go:61] "storage-provisioner" [6bce0572-aad8-4a9f-978f-9bd0ff62904a] Running
	I1031 00:17:25.317737  248718 system_pods.go:74] duration metric: took 3.978040466s to wait for pod list to return data ...
	I1031 00:17:25.317752  248718 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:17:25.320120  248718 default_sa.go:45] found service account: "default"
	I1031 00:17:25.320147  248718 default_sa.go:55] duration metric: took 2.387709ms for default service account to be created ...
	I1031 00:17:25.320156  248718 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:17:25.325979  248718 system_pods.go:86] 8 kube-system pods found
	I1031 00:17:25.326004  248718 system_pods.go:89] "coredns-5dd5756b68-dqrs4" [f6d80a09-c397-4c78-a038-f07cad11de9c] Running
	I1031 00:17:25.326009  248718 system_pods.go:89] "etcd-embed-certs-078843" [2dd3d20f-1309-4ec9-ab75-6b00cadc5827] Running
	I1031 00:17:25.326014  248718 system_pods.go:89] "kube-apiserver-embed-certs-078843" [6a41123e-11a9-4aff-8f78-802b8f59a1bb] Running
	I1031 00:17:25.326018  248718 system_pods.go:89] "kube-controller-manager-embed-certs-078843" [9ccb551e-3e3f-4cdc-991e-65b41febf105] Running
	I1031 00:17:25.326022  248718 system_pods.go:89] "kube-proxy-287dq" [c9c3a3a9-ff79-4cd8-ab26-a4ca2bec1fd9] Running
	I1031 00:17:25.326025  248718 system_pods.go:89] "kube-scheduler-embed-certs-078843" [13a0f095-b945-437c-a7ef-929739bfcb01] Running
	I1031 00:17:25.326055  248718 system_pods.go:89] "metrics-server-57f55c9bc5-pm6qx" [5ed61015-eb88-4381-adc3-8d1f4021c6aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:17:25.326079  248718 system_pods.go:89] "storage-provisioner" [6bce0572-aad8-4a9f-978f-9bd0ff62904a] Running
	I1031 00:17:25.326088  248718 system_pods.go:126] duration metric: took 5.92719ms to wait for k8s-apps to be running ...
	I1031 00:17:25.326097  248718 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:17:25.326148  248718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:17:25.342753  248718 system_svc.go:56] duration metric: took 16.646026ms WaitForService to wait for kubelet.
	I1031 00:17:25.342775  248718 kubeadm.go:581] duration metric: took 4m20.257105243s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:17:25.342793  248718 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:17:25.348257  248718 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:17:25.348315  248718 node_conditions.go:123] node cpu capacity is 2
	I1031 00:17:25.348379  248718 node_conditions.go:105] duration metric: took 5.579398ms to run NodePressure ...
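The NodePressure step above reports each node's capacity (17784752Ki of ephemeral storage and 2 CPUs here) as part of verifying node conditions. The same numbers can be pulled with a jsonpath query; a small sketch using the kubectl binary and kubeconfig paths from the log (the jsonpath expression itself is an assumption, not what minikube runs):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Print "<node>\t<cpu>\t<ephemeral-storage>" per node.
	jsonpath := `{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}`
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.28.3/kubectl",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"get", "nodes", "-o", "jsonpath="+jsonpath).CombinedOutput()
	if err != nil {
		fmt.Println("kubectl get nodes failed:", err)
	}
	fmt.Print(string(out))
}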
	I1031 00:17:25.348413  248718 start.go:228] waiting for startup goroutines ...
	I1031 00:17:25.348426  248718 start.go:233] waiting for cluster config update ...
	I1031 00:17:25.348440  248718 start.go:242] writing updated cluster config ...
	I1031 00:17:25.349022  248718 ssh_runner.go:195] Run: rm -f paused
	I1031 00:17:25.415112  248718 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1031 00:17:25.418179  248718 out.go:177] * Done! kubectl is now configured to use "embed-certs-078843" cluster and "default" namespace by default
	I1031 00:17:21.166338  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:23.666609  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:24.447530  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:26.947352  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:29.570822  248387 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004974 seconds
	I1031 00:17:29.570964  248387 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 00:17:29.587033  248387 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 00:17:30.119470  248387 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 00:17:30.119696  248387 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-640155 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 00:17:30.635312  248387 kubeadm.go:322] [bootstrap-token] Using token: cwaa4b.bqwxrocs0j7ngn44
	I1031 00:17:26.166271  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:28.664576  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:30.664963  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:30.636717  248387 out.go:204]   - Configuring RBAC rules ...
	I1031 00:17:30.636873  248387 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 00:17:30.642895  248387 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 00:17:30.651729  248387 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 00:17:30.655472  248387 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 00:17:30.659228  248387 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 00:17:30.668748  248387 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 00:17:30.690255  248387 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 00:17:30.950445  248387 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 00:17:31.051453  248387 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 00:17:31.051475  248387 kubeadm.go:322] 
	I1031 00:17:31.051536  248387 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 00:17:31.051583  248387 kubeadm.go:322] 
	I1031 00:17:31.051709  248387 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 00:17:31.051728  248387 kubeadm.go:322] 
	I1031 00:17:31.051767  248387 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 00:17:31.051843  248387 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 00:17:31.051930  248387 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 00:17:31.051943  248387 kubeadm.go:322] 
	I1031 00:17:31.052013  248387 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1031 00:17:31.052024  248387 kubeadm.go:322] 
	I1031 00:17:31.052104  248387 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 00:17:31.052130  248387 kubeadm.go:322] 
	I1031 00:17:31.052191  248387 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 00:17:31.052280  248387 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 00:17:31.052375  248387 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 00:17:31.052383  248387 kubeadm.go:322] 
	I1031 00:17:31.052485  248387 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 00:17:31.052578  248387 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 00:17:31.052612  248387 kubeadm.go:322] 
	I1031 00:17:31.052744  248387 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token cwaa4b.bqwxrocs0j7ngn44 \
	I1031 00:17:31.052900  248387 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 \
	I1031 00:17:31.052957  248387 kubeadm.go:322] 	--control-plane 
	I1031 00:17:31.052969  248387 kubeadm.go:322] 
	I1031 00:17:31.053092  248387 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 00:17:31.053107  248387 kubeadm.go:322] 
	I1031 00:17:31.053217  248387 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token cwaa4b.bqwxrocs0j7ngn44 \
	I1031 00:17:31.053359  248387 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1031 00:17:31.053517  248387 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 00:17:31.053540  248387 cni.go:84] Creating CNI manager for ""
	I1031 00:17:31.053552  248387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:17:31.055477  248387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:17:29.447694  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:31.449117  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:33.947759  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:31.056845  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:17:31.095104  248387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:17:31.131198  248387 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:17:31.131322  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:31.131337  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4 minikube.k8s.io/name=no-preload-640155 minikube.k8s.io/updated_at=2023_10_31T00_17_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:31.581951  248387 ops.go:34] apiserver oom_adj: -16
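The oom_adj line above comes from shelling out to `cat /proc/$(pgrep kube-apiserver)/oom_adj`, which records -16 for the apiserver. The same value can be read straight from procfs; a small sketch with pgrep and the proc path as the only inputs, as in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep exits non-zero when there is no match, so a successful run
	// guarantees at least one PID on output.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.Fields(string(out))[0]
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("reading oom_adj:", err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}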
	I1031 00:17:31.582010  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:31.741330  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:32.350182  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:32.850643  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:33.350205  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:33.850216  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:34.349583  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:34.850194  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:32.666281  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:35.168579  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:36.449644  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:38.946898  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:35.350661  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:35.850301  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:36.349673  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:36.849749  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:37.349755  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:37.850628  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:38.350204  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:38.849697  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:39.350194  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:39.850027  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:37.667083  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:40.166305  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:40.349747  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:40.850194  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:41.350476  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:41.850214  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:42.350555  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:42.850295  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:43.350645  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:43.679529  248387 kubeadm.go:1081] duration metric: took 12.548274555s to wait for elevateKubeSystemPrivileges.
	I1031 00:17:43.679561  248387 kubeadm.go:406] StartCluster complete in 5m6.156207823s
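The burst of `kubectl get sa default` runs above is minikube polling, at roughly 500ms intervals, for the default service account to exist before finishing cluster bring-up; it took about 12.5s here. A simplified retry loop with the same shape, reusing the kubectl and kubeconfig paths from the log (the interval and overall timeout are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.28.3/kubectl"
	kubeconfig := "/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Exit code 0 means the service account is visible to the apiserver.
		if err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}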
	I1031 00:17:43.679585  248387 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:17:43.679674  248387 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:17:43.682045  248387 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:17:43.684483  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:17:43.684785  248387 config.go:182] Loaded profile config "no-preload-640155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:17:43.684856  248387 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:17:43.684927  248387 addons.go:69] Setting storage-provisioner=true in profile "no-preload-640155"
	I1031 00:17:43.685036  248387 addons.go:231] Setting addon storage-provisioner=true in "no-preload-640155"
	W1031 00:17:43.685063  248387 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:17:43.685159  248387 host.go:66] Checking if "no-preload-640155" exists ...
	I1031 00:17:43.685323  248387 addons.go:69] Setting metrics-server=true in profile "no-preload-640155"
	I1031 00:17:43.685339  248387 addons.go:231] Setting addon metrics-server=true in "no-preload-640155"
	W1031 00:17:43.685356  248387 addons.go:240] addon metrics-server should already be in state true
	I1031 00:17:43.685395  248387 host.go:66] Checking if "no-preload-640155" exists ...
	I1031 00:17:43.685653  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.685706  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.685893  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.685978  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.686168  248387 addons.go:69] Setting default-storageclass=true in profile "no-preload-640155"
	I1031 00:17:43.686191  248387 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-640155"
	I1031 00:17:43.686545  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.686651  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.705002  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1031 00:17:43.705181  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39807
	I1031 00:17:43.705556  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.706410  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.706515  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.706543  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.706893  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I1031 00:17:43.706968  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.707139  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:17:43.707141  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.707157  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.707503  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.708166  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.708183  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.708236  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.708752  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.708783  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.709044  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.709715  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.709762  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.711511  248387 addons.go:231] Setting addon default-storageclass=true in "no-preload-640155"
	W1031 00:17:43.711525  248387 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:17:43.711553  248387 host.go:66] Checking if "no-preload-640155" exists ...
	I1031 00:17:43.711887  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.711927  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.730687  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I1031 00:17:43.731513  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.732184  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.732205  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.732737  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.733201  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:17:43.734567  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33799
	I1031 00:17:43.734708  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38837
	I1031 00:17:43.735166  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.735665  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.735687  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.736245  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.736325  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.736490  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:17:43.736559  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:17:43.737461  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.739478  248387 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:17:43.737480  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.738913  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:17:43.741138  248387 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:17:43.741154  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:17:43.741176  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:17:43.742564  248387 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:17:43.741663  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.744300  248387 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:17:43.744312  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:17:43.744326  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:17:43.744413  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.745065  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.745106  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.753076  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.753082  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:17:43.753110  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:17:43.753196  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:17:43.753200  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:17:43.753235  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.753249  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.753282  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:17:43.753376  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:17:43.753469  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:17:43.753527  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:17:43.753624  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:17:43.753739  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:17:43.770481  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44553
	I1031 00:17:43.770925  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.773191  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.773223  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.773636  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.773840  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:17:43.775633  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:17:43.775954  248387 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:17:43.775969  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:17:43.775988  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:17:43.778552  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.778797  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:17:43.778823  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.779021  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:17:43.779204  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:17:43.779386  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:17:43.779683  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
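Each `new ssh client` line above corresponds to minikube opening an SSH connection to the node (192.168.61.168:22, user docker, the per-profile id_rsa key) so it can copy addon manifests and run commands on the guest. A bare-bones equivalent using golang.org/x/crypto/ssh; the host, user, and key path come from the log, while everything else, including the ignored host key, is an illustration-only shortcut:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Acceptable for a throwaway test VM; real code should pin the host key.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.168.61.168:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("sudo crictl ps -a")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}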
	I1031 00:17:43.936171  248387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:17:43.958064  248387 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:17:43.958098  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:17:43.967116  248387 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-640155" context rescaled to 1 replicas
	I1031 00:17:43.967170  248387 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.168 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:17:43.969408  248387 out.go:177] * Verifying Kubernetes components...
	I1031 00:17:40.138062  249055 pod_ready.go:81] duration metric: took 4m0.000119587s waiting for pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace to be "Ready" ...
	E1031 00:17:40.138098  249055 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:17:40.138122  249055 pod_ready.go:38] duration metric: took 4m11.730710605s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:17:40.138164  249055 kubeadm.go:640] restartCluster took 4m31.295508075s
	W1031 00:17:40.138262  249055 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1031 00:17:40.138297  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1031 00:17:43.970897  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:17:43.997796  248387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:17:44.038710  248387 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:17:44.038738  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:17:44.075299  248387 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:17:44.075333  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:17:44.084795  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
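For readers unfamiliar with this step: the pipeline above pulls the coredns ConfigMap, uses sed to splice a hosts stanza (mapping host.minikube.internal to the host-side gateway IP) in front of the "forward . /etc/resolv.conf" block and a "log" directive before "errors", then replaces the ConfigMap. An illustrative way to inspect the result afterwards, using the same paths logged above:

    # Illustrative only; binary path, kubeconfig and IP are taken from the command logged above.
    sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml
    # The Corefile should now contain, just before its forward block:
    #     hosts {
    #        192.168.61.1 host.minikube.internal
    #        fallthrough
    #     }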
	I1031 00:17:44.172770  248387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:17:42.670020  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:45.165914  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:46.365906  248387 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.39492875s)
	I1031 00:17:46.365968  248387 node_ready.go:35] waiting up to 6m0s for node "no-preload-640155" to be "Ready" ...
	I1031 00:17:46.365998  248387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.368158747s)
	I1031 00:17:46.366066  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.366074  248387 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.281185782s)
	I1031 00:17:46.366103  248387 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1031 00:17:46.366086  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.366354  248387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.430149836s)
	I1031 00:17:46.366390  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.366402  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.366600  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.366612  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.366622  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.366631  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.366682  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.366732  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.366742  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.366751  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.366761  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.368921  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.368922  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.368958  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.369248  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.369293  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.369307  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.375988  248387 node_ready.go:49] node "no-preload-640155" has status "Ready":"True"
	I1031 00:17:46.376021  248387 node_ready.go:38] duration metric: took 10.036603ms waiting for node "no-preload-640155" to be "Ready" ...
	I1031 00:17:46.376036  248387 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:17:46.401563  248387 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gp6pj" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:46.425939  248387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.253121961s)
	I1031 00:17:46.426019  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.426035  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.427461  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.427471  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.427488  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.427498  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.427508  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.427894  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.427943  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.427954  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.427971  248387 addons.go:467] Verifying addon metrics-server=true in "no-preload-640155"
	I1031 00:17:46.436605  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.436630  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.436927  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.436959  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.436987  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.438529  248387 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1031 00:17:46.439869  248387 addons.go:502] enable addons completed in 2.755015847s: enabled=[storage-provisioner metrics-server default-storageclass]
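These addons were enabled as part of the profile's start configuration; the equivalent manual invocations (illustrative, using the profile name created by this run) would look like:

    # Illustrative; "no-preload-640155" is the profile used in this test run.
    minikube -p no-preload-640155 addons enable storage-provisioner
    minikube -p no-preload-640155 addons enable metrics-server
    minikube -p no-preload-640155 addons enable default-storageclass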
	I1031 00:17:48.527903  248387 pod_ready.go:92] pod "coredns-5dd5756b68-gp6pj" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.527939  248387 pod_ready.go:81] duration metric: took 2.126335033s waiting for pod "coredns-5dd5756b68-gp6pj" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.527954  248387 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.544043  248387 pod_ready.go:92] pod "etcd-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.544070  248387 pod_ready.go:81] duration metric: took 16.106665ms waiting for pod "etcd-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.544085  248387 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.552043  248387 pod_ready.go:92] pod "kube-apiserver-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.552075  248387 pod_ready.go:81] duration metric: took 7.981099ms waiting for pod "kube-apiserver-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.552092  248387 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.563073  248387 pod_ready.go:92] pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.563112  248387 pod_ready.go:81] duration metric: took 11.009619ms waiting for pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.563128  248387 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pkjsl" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.771051  248387 pod_ready.go:92] pod "kube-proxy-pkjsl" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.771080  248387 pod_ready.go:81] duration metric: took 207.944354ms waiting for pod "kube-proxy-pkjsl" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.771090  248387 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:49.170323  248387 pod_ready.go:92] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:49.170354  248387 pod_ready.go:81] duration metric: took 399.25516ms waiting for pod "kube-scheduler-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:49.170369  248387 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace to be "Ready" ...
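The pod_ready checks above poll each pod's Ready condition; a hand-run equivalent for any one of them (illustrative sketch, pod and context names taken from this run) is:

    # Illustrative check of a single pod's Ready condition; prints "True" when ready.
    kubectl --context no-preload-640155 -n kube-system get pod etcd-no-preload-640155 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'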
	I1031 00:17:47.166417  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:49.665614  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:51.479213  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:53.979583  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:54.802281  249055 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.663950968s)
	I1031 00:17:54.802401  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:17:54.818228  249055 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:17:54.829802  249055 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:17:54.841203  249055 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:17:54.841254  249055 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1031 00:17:54.900359  249055 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1031 00:17:54.900453  249055 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 00:17:55.068403  249055 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 00:17:55.068563  249055 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 00:17:55.068676  249055 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 00:17:55.316737  249055 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 00:17:51.665839  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:53.666626  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:55.319016  249055 out.go:204]   - Generating certificates and keys ...
	I1031 00:17:55.319172  249055 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 00:17:55.319275  249055 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 00:17:55.319395  249055 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1031 00:17:55.319481  249055 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1031 00:17:55.319603  249055 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1031 00:17:55.320419  249055 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1031 00:17:55.320814  249055 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1031 00:17:55.321700  249055 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1031 00:17:55.322211  249055 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1031 00:17:55.322708  249055 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1031 00:17:55.323252  249055 kubeadm.go:322] [certs] Using the existing "sa" key
	I1031 00:17:55.323344  249055 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 00:17:55.388450  249055 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 00:17:55.461692  249055 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 00:17:55.807861  249055 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 00:17:55.963028  249055 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 00:17:55.963510  249055 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 00:17:55.966001  249055 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 00:17:55.967951  249055 out.go:204]   - Booting up control plane ...
	I1031 00:17:55.968125  249055 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 00:17:55.968238  249055 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 00:17:55.968343  249055 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 00:17:55.989357  249055 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 00:17:55.990439  249055 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 00:17:55.990548  249055 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1031 00:17:56.126548  249055 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 00:17:56.479126  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:58.479232  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:56.166722  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:58.667319  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:00.980893  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:03.481571  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:04.629984  249055 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502923 seconds
	I1031 00:18:04.630137  249055 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 00:18:04.643529  249055 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 00:18:05.178336  249055 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 00:18:05.178549  249055 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-892233 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 00:18:05.695447  249055 kubeadm.go:322] [bootstrap-token] Using token: g00nr2.87o2mnv2u0jwf81d
	I1031 00:18:01.165232  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:03.166303  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:05.664899  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:05.696918  249055 out.go:204]   - Configuring RBAC rules ...
	I1031 00:18:05.697075  249055 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 00:18:05.706237  249055 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 00:18:05.720767  249055 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 00:18:05.731239  249055 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 00:18:05.736130  249055 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 00:18:05.740949  249055 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 00:18:05.759998  249055 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 00:18:06.051798  249055 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 00:18:06.118986  249055 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 00:18:06.119014  249055 kubeadm.go:322] 
	I1031 00:18:06.119078  249055 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 00:18:06.119084  249055 kubeadm.go:322] 
	I1031 00:18:06.119179  249055 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 00:18:06.119190  249055 kubeadm.go:322] 
	I1031 00:18:06.119225  249055 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 00:18:06.119282  249055 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 00:18:06.119326  249055 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 00:18:06.119332  249055 kubeadm.go:322] 
	I1031 00:18:06.119376  249055 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1031 00:18:06.119382  249055 kubeadm.go:322] 
	I1031 00:18:06.119424  249055 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 00:18:06.119435  249055 kubeadm.go:322] 
	I1031 00:18:06.119484  249055 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 00:18:06.119551  249055 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 00:18:06.119677  249055 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 00:18:06.119703  249055 kubeadm.go:322] 
	I1031 00:18:06.119830  249055 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 00:18:06.119938  249055 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 00:18:06.119957  249055 kubeadm.go:322] 
	I1031 00:18:06.120024  249055 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token g00nr2.87o2mnv2u0jwf81d \
	I1031 00:18:06.120179  249055 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 \
	I1031 00:18:06.120208  249055 kubeadm.go:322] 	--control-plane 
	I1031 00:18:06.120219  249055 kubeadm.go:322] 
	I1031 00:18:06.120330  249055 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 00:18:06.120368  249055 kubeadm.go:322] 
	I1031 00:18:06.120468  249055 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token g00nr2.87o2mnv2u0jwf81d \
	I1031 00:18:06.120559  249055 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1031 00:18:06.121091  249055 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
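The trailing preflight warning comes directly from kubeadm; the remedy it suggests is simply:

    # As suggested by the kubeadm warning above.
    sudo systemctl enable kubelet.service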
	I1031 00:18:06.121119  249055 cni.go:84] Creating CNI manager for ""
	I1031 00:18:06.121127  249055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:18:06.123073  249055 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:18:06.124566  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:18:06.140064  249055 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
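The 457-byte file written here is minikube's bridge CNI configuration. A conflist of roughly this shape (an illustrative sketch of a typical bridge + host-local setup; the field values are assumptions, not the exact bytes minikube writes) gives an idea of what lands in /etc/cni/net.d:

    # Illustrative sketch of a bridge CNI conflist; values are assumptions.
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF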
	I1031 00:18:06.171195  249055 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:18:06.171343  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:06.171359  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4 minikube.k8s.io/name=default-k8s-diff-port-892233 minikube.k8s.io/updated_at=2023_10_31T00_18_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:06.256957  249055 ops.go:34] apiserver oom_adj: -16
	I1031 00:18:06.637700  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:06.769942  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:07.383359  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:07.883621  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:08.384017  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:08.883751  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:05.979125  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:07.979280  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:09.981296  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:07.666495  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:10.165765  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:09.383896  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:09.883523  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:10.384077  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:10.883546  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:11.383417  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:11.883493  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:12.384043  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:12.884000  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:13.383479  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:13.884100  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:12.479614  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:14.978890  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:12.666054  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:15.163419  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:14.384001  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:14.884297  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:15.383607  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:15.883617  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:16.383591  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:16.884141  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:17.384112  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:17.884196  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:18.384156  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:18.883687  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:19.114222  249055 kubeadm.go:1081] duration metric: took 12.942949327s to wait for elevateKubeSystemPrivileges.
	I1031 00:18:19.114261  249055 kubeadm.go:406] StartCluster complete in 5m10.335188993s
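The repeated "kubectl get sa default" runs above are a retry loop: minikube keeps polling until the "default" ServiceAccount exists, i.e. until the controller-manager's service-account controller has caught up after init. A shell equivalent of that wait (illustrative) is:

    # Illustrative: retry until the default ServiceAccount exists.
    until sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done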
	I1031 00:18:19.114295  249055 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:18:19.114401  249055 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:18:19.116632  249055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:18:19.116971  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:18:19.117107  249055 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:18:19.117188  249055 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-892233"
	I1031 00:18:19.117202  249055 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-892233"
	I1031 00:18:19.117221  249055 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-892233"
	I1031 00:18:19.117231  249055 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-892233"
	I1031 00:18:19.117239  249055 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-892233"
	W1031 00:18:19.117243  249055 addons.go:240] addon metrics-server should already be in state true
	I1031 00:18:19.117265  249055 config.go:182] Loaded profile config "default-k8s-diff-port-892233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:18:19.117305  249055 host.go:66] Checking if "default-k8s-diff-port-892233" exists ...
	I1031 00:18:19.117213  249055 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-892233"
	W1031 00:18:19.117326  249055 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:18:19.117372  249055 host.go:66] Checking if "default-k8s-diff-port-892233" exists ...
	I1031 00:18:19.117711  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.117740  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.117746  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.117761  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.117711  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.117830  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.134384  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38003
	I1031 00:18:19.134426  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I1031 00:18:19.134810  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.134915  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.135437  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.135461  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.135648  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.135675  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.136018  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.136074  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.136578  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.136625  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.137167  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.137198  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.144184  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35153
	I1031 00:18:19.144763  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.145263  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.145293  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.145648  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.145852  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:18:19.152132  249055 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-892233"
	W1031 00:18:19.152194  249055 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:18:19.152240  249055 host.go:66] Checking if "default-k8s-diff-port-892233" exists ...
	I1031 00:18:19.152775  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.152867  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.154334  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45793
	I1031 00:18:19.155862  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38905
	I1031 00:18:19.157267  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.158677  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.158735  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.158863  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.164983  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.165014  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.165044  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.166267  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.166284  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:18:19.169122  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:18:19.169199  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:18:19.174627  249055 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:18:19.170934  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:18:19.176219  249055 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:18:19.177591  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:18:19.177619  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:18:19.179052  249055 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:18:19.176693  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45785
	I1031 00:18:19.178184  249055 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-892233" context rescaled to 1 replicas
	I1031 00:18:19.179171  249055 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.2 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:18:19.181526  249055 out.go:177] * Verifying Kubernetes components...
	I1031 00:18:19.182930  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:18:16.980163  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:18.981179  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:17.165555  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:19.174245  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:19.181603  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.184667  249055 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:18:19.184676  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:18:19.184683  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:18:19.184698  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.179546  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.184702  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:18:19.182398  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:18:19.184914  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:18:19.185097  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:18:19.185743  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.185761  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.185827  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:18:19.186516  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.187946  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.187988  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:18:19.188014  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.188359  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.188374  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.188549  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:18:19.188757  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:18:19.189003  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:18:19.189160  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:18:19.203564  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I1031 00:18:19.203935  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.204374  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.204399  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.204741  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.204994  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:18:19.207012  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:18:19.207266  249055 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:18:19.207283  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:18:19.207302  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:18:19.209950  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.210314  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:18:19.210332  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.210507  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:18:19.210701  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:18:19.210830  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:18:19.210962  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:18:19.423829  249055 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:18:19.423852  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:18:19.440581  249055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:18:19.466961  249055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:18:19.511517  249055 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:18:19.511543  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:18:19.591560  249055 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:18:19.591588  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:18:19.628414  249055 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-892233" to be "Ready" ...
	I1031 00:18:19.628560  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 00:18:19.648329  249055 node_ready.go:49] node "default-k8s-diff-port-892233" has status "Ready":"True"
	I1031 00:18:19.648353  249055 node_ready.go:38] duration metric: took 19.904402ms waiting for node "default-k8s-diff-port-892233" to be "Ready" ...
	I1031 00:18:19.648364  249055 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:18:19.658333  249055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:18:19.692147  249055 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-j9g85" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:21.904902  249055 pod_ready.go:102] pod "coredns-5dd5756b68-j9g85" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:22.104924  249055 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.637923019s)
	I1031 00:18:22.104999  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.104997  249055 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.664373813s)
	I1031 00:18:22.105008  249055 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.476413511s)
	I1031 00:18:22.105035  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.105013  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.105052  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.105035  249055 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1031 00:18:22.105350  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.105366  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.105376  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.105388  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.105479  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | Closing plugin on server side
	I1031 00:18:22.105541  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.105554  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.105573  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.105594  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.105821  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.105852  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.105860  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.105870  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.146205  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.146231  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.146598  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.146631  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.219948  249055 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.561551335s)
	I1031 00:18:22.220017  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.220033  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.220412  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.220441  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.220459  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.220474  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.220820  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.220840  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.220853  249055 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-892233"
	I1031 00:18:22.222793  249055 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1031 00:18:22.224194  249055 addons.go:502] enable addons completed in 3.107083845s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1031 00:18:22.880805  249055 pod_ready.go:92] pod "coredns-5dd5756b68-j9g85" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:22.880840  249055 pod_ready.go:81] duration metric: took 3.18866819s waiting for pod "coredns-5dd5756b68-j9g85" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:22.880853  249055 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pjtg4" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.912036  249055 pod_ready.go:92] pod "coredns-5dd5756b68-pjtg4" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:23.912066  249055 pod_ready.go:81] duration metric: took 1.031204489s waiting for pod "coredns-5dd5756b68-pjtg4" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.912079  249055 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.918589  249055 pod_ready.go:92] pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:23.918609  249055 pod_ready.go:81] duration metric: took 6.523247ms waiting for pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.918619  249055 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.925040  249055 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:23.925059  249055 pod_ready.go:81] duration metric: took 6.434141ms waiting for pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.925067  249055 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.073002  249055 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:24.073029  249055 pod_ready.go:81] duration metric: took 147.953037ms waiting for pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.073044  249055 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-77gzz" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:21.478451  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:23.479849  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:24.473158  249055 pod_ready.go:92] pod "kube-proxy-77gzz" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:24.473184  249055 pod_ready.go:81] duration metric: took 400.13282ms waiting for pod "kube-proxy-77gzz" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.473194  249055 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.873506  249055 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:24.873528  249055 pod_ready.go:81] duration metric: took 400.328112ms waiting for pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.873538  249055 pod_ready.go:38] duration metric: took 5.225163782s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:18:24.873558  249055 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:18:24.873617  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:18:24.890474  249055 api_server.go:72] duration metric: took 5.711236569s to wait for apiserver process to appear ...
	I1031 00:18:24.890508  249055 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:18:24.890533  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:18:24.896826  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 200:
	ok
	I1031 00:18:24.898203  249055 api_server.go:141] control plane version: v1.28.3
	I1031 00:18:24.898226  249055 api_server.go:131] duration metric: took 7.708512ms to wait for apiserver health ...
	I1031 00:18:24.898234  249055 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:18:25.076806  249055 system_pods.go:59] 9 kube-system pods found
	I1031 00:18:25.076835  249055 system_pods.go:61] "coredns-5dd5756b68-j9g85" [e4534565-4d9b-44d6-bcf1-5b57645645bc] Running
	I1031 00:18:25.076840  249055 system_pods.go:61] "coredns-5dd5756b68-pjtg4" [6c771175-3c51-4988-8b90-58ff0e33a5f8] Running
	I1031 00:18:25.076845  249055 system_pods.go:61] "etcd-default-k8s-diff-port-892233" [47dea79e-371e-45ff-960e-41e96a4427e5] Running
	I1031 00:18:25.076850  249055 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-892233" [87be303c-6850-4ab1-98a3-c8a08f601965] Running
	I1031 00:18:25.076854  249055 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-892233" [7533baa8-87b4-4fa9-8385-9945e0fffaf4] Running
	I1031 00:18:25.076857  249055 system_pods.go:61] "kube-proxy-77gzz" [e7cb1c4a-2ad0-47b9-bca4-2e03d4e1cf39] Running
	I1031 00:18:25.076861  249055 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-892233" [b7630ce4-db97-45a6-a9a3-f7b8f3128182] Running
	I1031 00:18:25.076868  249055 system_pods.go:61] "metrics-server-57f55c9bc5-8pc87" [c91683ff-11bf-4530-90c3-91f4b28e2dab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:18:25.076874  249055 system_pods.go:61] "storage-provisioner" [995d33e4-0d28-4efb-8d30-d5a05d04b61c] Running
	I1031 00:18:25.076882  249055 system_pods.go:74] duration metric: took 178.64211ms to wait for pod list to return data ...
	I1031 00:18:25.076889  249055 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:18:25.272531  249055 default_sa.go:45] found service account: "default"
	I1031 00:18:25.272557  249055 default_sa.go:55] duration metric: took 195.662215ms for default service account to be created ...
	I1031 00:18:25.272567  249055 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:18:25.477225  249055 system_pods.go:86] 9 kube-system pods found
	I1031 00:18:25.477258  249055 system_pods.go:89] "coredns-5dd5756b68-j9g85" [e4534565-4d9b-44d6-bcf1-5b57645645bc] Running
	I1031 00:18:25.477266  249055 system_pods.go:89] "coredns-5dd5756b68-pjtg4" [6c771175-3c51-4988-8b90-58ff0e33a5f8] Running
	I1031 00:18:25.477275  249055 system_pods.go:89] "etcd-default-k8s-diff-port-892233" [47dea79e-371e-45ff-960e-41e96a4427e5] Running
	I1031 00:18:25.477282  249055 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-892233" [87be303c-6850-4ab1-98a3-c8a08f601965] Running
	I1031 00:18:25.477292  249055 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-892233" [7533baa8-87b4-4fa9-8385-9945e0fffaf4] Running
	I1031 00:18:25.477298  249055 system_pods.go:89] "kube-proxy-77gzz" [e7cb1c4a-2ad0-47b9-bca4-2e03d4e1cf39] Running
	I1031 00:18:25.477309  249055 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-892233" [b7630ce4-db97-45a6-a9a3-f7b8f3128182] Running
	I1031 00:18:25.477323  249055 system_pods.go:89] "metrics-server-57f55c9bc5-8pc87" [c91683ff-11bf-4530-90c3-91f4b28e2dab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:18:25.477333  249055 system_pods.go:89] "storage-provisioner" [995d33e4-0d28-4efb-8d30-d5a05d04b61c] Running
	I1031 00:18:25.477343  249055 system_pods.go:126] duration metric: took 204.769317ms to wait for k8s-apps to be running ...
	I1031 00:18:25.477356  249055 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:18:25.477416  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:18:25.494054  249055 system_svc.go:56] duration metric: took 16.688482ms WaitForService to wait for kubelet.
	I1031 00:18:25.494079  249055 kubeadm.go:581] duration metric: took 6.314858374s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:18:25.494097  249055 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:18:25.673698  249055 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:18:25.673729  249055 node_conditions.go:123] node cpu capacity is 2
	I1031 00:18:25.673742  249055 node_conditions.go:105] duration metric: took 179.63938ms to run NodePressure ...
	I1031 00:18:25.673756  249055 start.go:228] waiting for startup goroutines ...
	I1031 00:18:25.673764  249055 start.go:233] waiting for cluster config update ...
	I1031 00:18:25.673778  249055 start.go:242] writing updated cluster config ...
	I1031 00:18:25.674107  249055 ssh_runner.go:195] Run: rm -f paused
	I1031 00:18:25.729477  249055 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1031 00:18:25.731433  249055 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-892233" cluster and "default" namespace by default
	I1031 00:18:21.666578  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:23.667065  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:25.980194  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:27.983361  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:26.166839  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:28.664820  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:30.665038  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:30.478938  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:32.980862  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:33.164907  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:35.165601  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:35.479491  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:37.978397  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:39.979837  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:37.167604  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:39.665586  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:41.982368  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:44.476905  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:41.359122  248084 pod_ready.go:81] duration metric: took 4m0.000818862s waiting for pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace to be "Ready" ...
	E1031 00:18:41.359173  248084 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:18:41.359193  248084 pod_ready.go:38] duration metric: took 4m1.201522433s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:18:41.359227  248084 kubeadm.go:640] restartCluster took 5m7.223824608s
	W1031 00:18:41.359305  248084 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1031 00:18:41.359335  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1031 00:18:46.480820  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:48.487440  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:46.413914  248084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.054544075s)
	I1031 00:18:46.414001  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:18:46.427362  248084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:18:46.436557  248084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:18:46.444929  248084 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:18:46.445010  248084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1031 00:18:46.659252  248084 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 00:18:50.978966  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:52.980133  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:59.061122  248084 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1031 00:18:59.061211  248084 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 00:18:59.061324  248084 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 00:18:59.061476  248084 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 00:18:59.061695  248084 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 00:18:59.061861  248084 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 00:18:59.061989  248084 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 00:18:59.062059  248084 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1031 00:18:59.062158  248084 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 00:18:59.063991  248084 out.go:204]   - Generating certificates and keys ...
	I1031 00:18:59.064091  248084 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 00:18:59.064178  248084 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 00:18:59.064261  248084 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1031 00:18:59.064320  248084 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1031 00:18:59.064400  248084 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1031 00:18:59.064478  248084 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1031 00:18:59.064590  248084 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1031 00:18:59.064687  248084 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1031 00:18:59.064777  248084 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1031 00:18:59.064884  248084 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1031 00:18:59.064967  248084 kubeadm.go:322] [certs] Using the existing "sa" key
	I1031 00:18:59.065056  248084 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 00:18:59.065123  248084 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 00:18:59.065199  248084 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 00:18:59.065284  248084 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 00:18:59.065375  248084 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 00:18:59.065483  248084 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 00:18:59.067362  248084 out.go:204]   - Booting up control plane ...
	I1031 00:18:59.067477  248084 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 00:18:59.067584  248084 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 00:18:59.067655  248084 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 00:18:59.067761  248084 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 00:18:59.067952  248084 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 00:18:59.068089  248084 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004306 seconds
	I1031 00:18:59.068174  248084 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 00:18:59.068330  248084 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 00:18:59.068419  248084 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 00:18:59.068536  248084 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-225140 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1031 00:18:59.068585  248084 kubeadm.go:322] [bootstrap-token] Using token: 1g4jse.zc5opkcf3va44z15
	I1031 00:18:59.070040  248084 out.go:204]   - Configuring RBAC rules ...
	I1031 00:18:59.070142  248084 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 00:18:59.070305  248084 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 00:18:59.070451  248084 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 00:18:59.070569  248084 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 00:18:59.070657  248084 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 00:18:59.070700  248084 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 00:18:59.070742  248084 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 00:18:59.070748  248084 kubeadm.go:322] 
	I1031 00:18:59.070799  248084 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 00:18:59.070809  248084 kubeadm.go:322] 
	I1031 00:18:59.070900  248084 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 00:18:59.070912  248084 kubeadm.go:322] 
	I1031 00:18:59.070933  248084 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 00:18:59.070983  248084 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 00:18:59.071030  248084 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 00:18:59.071035  248084 kubeadm.go:322] 
	I1031 00:18:59.071082  248084 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 00:18:59.071158  248084 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 00:18:59.071269  248084 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 00:18:59.071278  248084 kubeadm.go:322] 
	I1031 00:18:59.071392  248084 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1031 00:18:59.071498  248084 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 00:18:59.071509  248084 kubeadm.go:322] 
	I1031 00:18:59.071608  248084 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1g4jse.zc5opkcf3va44z15 \
	I1031 00:18:59.071749  248084 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 \
	I1031 00:18:59.071783  248084 kubeadm.go:322]     --control-plane 	  
	I1031 00:18:59.071793  248084 kubeadm.go:322] 
	I1031 00:18:59.071899  248084 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 00:18:59.071912  248084 kubeadm.go:322] 
	I1031 00:18:59.072051  248084 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1g4jse.zc5opkcf3va44z15 \
	I1031 00:18:59.072196  248084 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1031 00:18:59.072228  248084 cni.go:84] Creating CNI manager for ""
	I1031 00:18:59.072243  248084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:18:59.073949  248084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:18:55.479295  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:57.983131  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:59.075900  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:18:59.087288  248084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:18:59.112130  248084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:18:59.112241  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:59.112258  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4 minikube.k8s.io/name=old-k8s-version-225140 minikube.k8s.io/updated_at=2023_10_31T00_18_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:59.144297  248084 ops.go:34] apiserver oom_adj: -16
	I1031 00:18:59.352655  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:59.464268  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:00.069316  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:00.569382  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:00.481532  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:02.978563  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:01.069124  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:01.569535  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:02.069209  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:02.569292  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:03.069280  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:03.569469  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:04.069050  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:04.569082  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:05.068795  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:05.569625  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:05.479444  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:07.980592  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:09.982873  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:06.069318  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:06.569043  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:07.069599  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:07.569098  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:08.069690  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:08.569668  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:09.069735  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:09.569294  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:10.069080  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:10.569441  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:11.068991  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:11.569543  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:12.069495  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:12.568757  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:13.069012  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:13.569638  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:13.789009  248084 kubeadm.go:1081] duration metric: took 14.676828073s to wait for elevateKubeSystemPrivileges.
	I1031 00:19:13.789061  248084 kubeadm.go:406] StartCluster complete in 5m39.716410778s
	I1031 00:19:13.789090  248084 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:19:13.789209  248084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:19:13.791883  248084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:19:13.792204  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:19:13.792368  248084 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:19:13.792451  248084 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-225140"
	I1031 00:19:13.792457  248084 config.go:182] Loaded profile config "old-k8s-version-225140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1031 00:19:13.792471  248084 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-225140"
	W1031 00:19:13.792480  248084 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:19:13.792485  248084 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-225140"
	I1031 00:19:13.792515  248084 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-225140"
	I1031 00:19:13.792531  248084 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-225140"
	I1031 00:19:13.792534  248084 host.go:66] Checking if "old-k8s-version-225140" exists ...
	W1031 00:19:13.792540  248084 addons.go:240] addon metrics-server should already be in state true
	I1031 00:19:13.792568  248084 host.go:66] Checking if "old-k8s-version-225140" exists ...
	I1031 00:19:13.792516  248084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-225140"
	I1031 00:19:13.792981  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.792981  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.793021  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.793104  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.793147  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.793254  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.811115  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34449
	I1031 00:19:13.811377  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I1031 00:19:13.811793  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.811913  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.812411  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.812433  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.812586  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.812636  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.812764  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.812833  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35585
	I1031 00:19:13.813035  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:19:13.813186  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.813284  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.813624  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.813649  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.813896  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.813938  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.813984  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.814742  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.814791  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.817328  248084 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-225140"
	W1031 00:19:13.817352  248084 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:19:13.817383  248084 host.go:66] Checking if "old-k8s-version-225140" exists ...
	I1031 00:19:13.817651  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.817676  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.831410  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I1031 00:19:13.832059  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.832665  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.832686  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.833071  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.833396  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:19:13.834672  248084 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-225140" context rescaled to 1 replicas
	I1031 00:19:13.834715  248084 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.65 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:19:13.837043  248084 out.go:177] * Verifying Kubernetes components...
	I1031 00:19:13.834927  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I1031 00:19:13.835269  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:19:13.835504  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I1031 00:19:13.837823  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.838827  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:19:13.840427  248084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:19:13.838307  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.839305  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.842067  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.842200  248084 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:19:13.842220  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:19:13.842259  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:19:13.842518  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.843110  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.843159  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.843539  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.843577  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.844178  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.844488  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:19:13.846259  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.846704  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:19:13.848811  248084 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:19:12.479334  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:14.484105  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:13.847143  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:19:13.847192  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:19:13.850295  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.850300  248084 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:19:13.850319  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:19:13.850341  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:19:13.850537  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:19:13.850712  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:19:13.851115  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:19:13.853651  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.854192  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:19:13.854226  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.854563  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:19:13.854758  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:19:13.854967  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:19:13.855112  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:19:13.862473  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33537
	I1031 00:19:13.862970  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.863496  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.863526  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.864026  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.864257  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:19:13.866270  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:19:13.866530  248084 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:19:13.866546  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:19:13.866565  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:19:13.870580  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.870992  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:19:13.871028  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.871142  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:19:13.871372  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:19:13.871542  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:19:13.871678  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:19:14.034938  248084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:19:14.040988  248084 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:19:14.041016  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:19:14.061666  248084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:19:14.111727  248084 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:19:14.111758  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:19:14.125610  248084 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-225140" to be "Ready" ...
	I1031 00:19:14.125707  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 00:19:14.165369  248084 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:19:14.165397  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:19:14.193366  248084 node_ready.go:49] node "old-k8s-version-225140" has status "Ready":"True"
	I1031 00:19:14.193389  248084 node_ready.go:38] duration metric: took 67.750717ms waiting for node "old-k8s-version-225140" to be "Ready" ...
	I1031 00:19:14.193401  248084 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:19:14.207505  248084 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace to be "Ready" ...
	I1031 00:19:14.276613  248084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:19:15.572065  248084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.537074399s)
	I1031 00:19:15.572136  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.572152  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.572177  248084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.510470973s)
	I1031 00:19:15.572219  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.572238  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.572336  248084 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.446596481s)
	I1031 00:19:15.572363  248084 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1031 00:19:15.572603  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.572621  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.572632  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.572642  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.572697  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.572711  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.572757  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.572778  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.572756  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Closing plugin on server side
	I1031 00:19:15.572908  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Closing plugin on server side
	I1031 00:19:15.572910  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.572970  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.573533  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.573554  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.586186  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.586210  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.586507  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.586530  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.586546  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Closing plugin on server side
	I1031 00:19:15.700772  248084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.424096792s)
	I1031 00:19:15.700835  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.700851  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.701196  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.701217  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.701230  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.701242  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.701531  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Closing plugin on server side
	I1031 00:19:15.701561  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.701574  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.701585  248084 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-225140"
	I1031 00:19:15.703404  248084 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1031 00:19:15.704856  248084 addons.go:502] enable addons completed in 1.91251063s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1031 00:19:16.980629  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:19.478989  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:16.278623  248084 pod_ready.go:102] pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:18.779192  248084 pod_ready.go:102] pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:21.978882  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:23.981260  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:21.276797  248084 pod_ready.go:102] pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:23.277531  248084 pod_ready.go:92] pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace has status "Ready":"True"
	I1031 00:19:23.277561  248084 pod_ready.go:81] duration metric: took 9.070020963s waiting for pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace to be "Ready" ...
	I1031 00:19:23.277575  248084 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v2pp4" in "kube-system" namespace to be "Ready" ...
	I1031 00:19:23.283345  248084 pod_ready.go:92] pod "kube-proxy-v2pp4" in "kube-system" namespace has status "Ready":"True"
	I1031 00:19:23.283367  248084 pod_ready.go:81] duration metric: took 5.78532ms waiting for pod "kube-proxy-v2pp4" in "kube-system" namespace to be "Ready" ...
	I1031 00:19:23.283374  248084 pod_ready.go:38] duration metric: took 9.089964646s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:19:23.283394  248084 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:19:23.283452  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:19:23.300275  248084 api_server.go:72] duration metric: took 9.465522842s to wait for apiserver process to appear ...
	I1031 00:19:23.300294  248084 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:19:23.300308  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:19:23.309064  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 200:
	ok
	I1031 00:19:23.310485  248084 api_server.go:141] control plane version: v1.16.0
	I1031 00:19:23.310508  248084 api_server.go:131] duration metric: took 10.207384ms to wait for apiserver health ...
	I1031 00:19:23.310517  248084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:19:23.314181  248084 system_pods.go:59] 4 kube-system pods found
	I1031 00:19:23.314205  248084 system_pods.go:61] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:23.314210  248084 system_pods.go:61] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:23.314217  248084 system_pods.go:61] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:23.314224  248084 system_pods.go:61] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:23.314230  248084 system_pods.go:74] duration metric: took 3.706807ms to wait for pod list to return data ...
	I1031 00:19:23.314236  248084 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:19:23.316411  248084 default_sa.go:45] found service account: "default"
	I1031 00:19:23.316435  248084 default_sa.go:55] duration metric: took 2.192647ms for default service account to be created ...
	I1031 00:19:23.316443  248084 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:19:23.320111  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:23.320137  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:23.320148  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:23.320159  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:23.320167  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:23.320190  248084 retry.go:31] will retry after 199.965979ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:23.524726  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:23.524754  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:23.524760  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:23.524766  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:23.524773  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:23.524788  248084 retry.go:31] will retry after 276.623866ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:23.807038  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:23.807066  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:23.807072  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:23.807080  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:23.807087  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:23.807104  248084 retry.go:31] will retry after 316.245952ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:24.128239  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:24.128268  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:24.128277  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:24.128287  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:24.128297  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:24.128326  248084 retry.go:31] will retry after 483.558456ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:24.616454  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:24.616486  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:24.616494  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:24.616505  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:24.616514  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:24.616534  248084 retry.go:31] will retry after 700.807178ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:25.323617  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:25.323666  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:25.323675  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:25.323687  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:25.323697  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:25.323718  248084 retry.go:31] will retry after 768.27646ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:26.485923  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:28.978283  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:26.097257  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:26.097283  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:26.097288  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:26.097295  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:26.097302  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:26.097320  248084 retry.go:31] will retry after 1.004884505s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:27.108295  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:27.108330  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:27.108339  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:27.108350  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:27.108360  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:27.108380  248084 retry.go:31] will retry after 1.256932803s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:28.369629  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:28.369668  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:28.369677  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:28.369688  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:28.369698  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:28.369722  248084 retry.go:31] will retry after 1.554545012s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:29.930268  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:29.930295  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:29.930314  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:29.930322  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:29.930338  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:29.930358  248084 retry.go:31] will retry after 1.794325328s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:30.981402  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:33.478794  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:31.729473  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:31.729511  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:31.729520  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:31.729531  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:31.729542  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:31.729563  248084 retry.go:31] will retry after 2.111450847s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:33.846759  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:33.846787  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:33.846792  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:33.846801  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:33.846807  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:33.846824  248084 retry.go:31] will retry after 2.198886772s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:35.981890  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:38.478284  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:36.050460  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:36.050491  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:36.050496  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:36.050505  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:36.050512  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:36.050530  248084 retry.go:31] will retry after 3.361148685s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:39.417603  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:39.417633  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:39.417640  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:39.417651  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:39.417660  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:39.417680  248084 retry.go:31] will retry after 4.41093106s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:40.978990  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:43.479103  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:43.834041  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:43.834083  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:43.834093  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:43.834104  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:43.834115  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:43.834134  248084 retry.go:31] will retry after 5.294476287s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:45.482986  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:47.978397  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:49.980183  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:49.133233  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:49.133264  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:49.133269  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:49.133276  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:49.133284  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:49.133300  248084 retry.go:31] will retry after 7.429511286s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:51.980355  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:53.981222  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:56.480456  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:58.979640  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:56.567247  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:56.567278  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:56.567284  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:56.567290  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:56.567297  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:56.567314  248084 retry.go:31] will retry after 10.944177906s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:20:01.477606  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:03.481220  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:05.979560  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:07.984688  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:07.518274  248084 system_pods.go:86] 7 kube-system pods found
	I1031 00:20:07.518300  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:20:07.518306  248084 system_pods.go:89] "kube-apiserver-old-k8s-version-225140" [8452eeb3-bce5-4105-aca6-41c438d0cd33] Pending
	I1031 00:20:07.518310  248084 system_pods.go:89] "kube-controller-manager-old-k8s-version-225140" [8d9ce065-09f3-4323-a564-195c4ae96389] Pending
	I1031 00:20:07.518314  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:20:07.518318  248084 system_pods.go:89] "kube-scheduler-old-k8s-version-225140" [aa567dc5-4668-4730-bfee-e1afdac14098] Pending
	I1031 00:20:07.518325  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:20:07.518331  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:20:07.518349  248084 retry.go:31] will retry after 8.381829497s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:20:10.485015  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:12.978647  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:15.479489  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:17.980834  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:15.906034  248084 system_pods.go:86] 8 kube-system pods found
	I1031 00:20:15.906066  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:20:15.906074  248084 system_pods.go:89] "etcd-old-k8s-version-225140" [c3c7682d-4b48-4e50-ba06-676723621872] Pending
	I1031 00:20:15.906080  248084 system_pods.go:89] "kube-apiserver-old-k8s-version-225140" [8452eeb3-bce5-4105-aca6-41c438d0cd33] Running
	I1031 00:20:15.906087  248084 system_pods.go:89] "kube-controller-manager-old-k8s-version-225140" [8d9ce065-09f3-4323-a564-195c4ae96389] Running
	I1031 00:20:15.906093  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:20:15.906100  248084 system_pods.go:89] "kube-scheduler-old-k8s-version-225140" [aa567dc5-4668-4730-bfee-e1afdac14098] Running
	I1031 00:20:15.906109  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:20:15.906120  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:20:15.906138  248084 retry.go:31] will retry after 11.167332732s: missing components: etcd
	I1031 00:20:20.481147  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:22.980858  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:24.982265  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:27.080224  248084 system_pods.go:86] 8 kube-system pods found
	I1031 00:20:27.080263  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:20:27.080272  248084 system_pods.go:89] "etcd-old-k8s-version-225140" [c3c7682d-4b48-4e50-ba06-676723621872] Running
	I1031 00:20:27.080279  248084 system_pods.go:89] "kube-apiserver-old-k8s-version-225140" [8452eeb3-bce5-4105-aca6-41c438d0cd33] Running
	I1031 00:20:27.080287  248084 system_pods.go:89] "kube-controller-manager-old-k8s-version-225140" [8d9ce065-09f3-4323-a564-195c4ae96389] Running
	I1031 00:20:27.080294  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:20:27.080301  248084 system_pods.go:89] "kube-scheduler-old-k8s-version-225140" [aa567dc5-4668-4730-bfee-e1afdac14098] Running
	I1031 00:20:27.080318  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:20:27.080332  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:20:27.080343  248084 system_pods.go:126] duration metric: took 1m3.763892339s to wait for k8s-apps to be running ...
	I1031 00:20:27.080357  248084 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:20:27.080408  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:20:27.098039  248084 system_svc.go:56] duration metric: took 17.670849ms WaitForService to wait for kubelet.
	I1031 00:20:27.098075  248084 kubeadm.go:581] duration metric: took 1m13.263332949s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:20:27.098105  248084 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:20:27.101093  248084 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:20:27.101126  248084 node_conditions.go:123] node cpu capacity is 2
	I1031 00:20:27.101182  248084 node_conditions.go:105] duration metric: took 3.066191ms to run NodePressure ...
	I1031 00:20:27.101198  248084 start.go:228] waiting for startup goroutines ...
	I1031 00:20:27.101208  248084 start.go:233] waiting for cluster config update ...
	I1031 00:20:27.101222  248084 start.go:242] writing updated cluster config ...
	I1031 00:20:27.101586  248084 ssh_runner.go:195] Run: rm -f paused
	I1031 00:20:27.157211  248084 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1031 00:20:27.159327  248084 out.go:177] 
	W1031 00:20:27.160872  248084 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1031 00:20:27.163644  248084 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1031 00:20:27.165443  248084 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-225140" cluster and "default" namespace by default
	I1031 00:20:27.481582  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:29.978812  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:32.478965  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:34.479052  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:36.486487  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:38.981098  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:41.478500  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:43.478933  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:45.978794  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:47.978937  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:49.980825  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:52.479268  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:54.978422  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:57.478476  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:59.478602  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:01.478639  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:03.479969  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:05.978907  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:08.478656  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:10.978877  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:12.981683  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:15.479094  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:17.978893  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:20.479878  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:22.483287  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:24.978077  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:26.979122  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:28.981476  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:31.478577  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:33.479816  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:35.979787  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:37.981859  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:40.477762  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:42.479382  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:44.479508  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:46.479851  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:48.482610  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:49.171002  248387 pod_ready.go:81] duration metric: took 4m0.000595541s waiting for pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace to be "Ready" ...
	E1031 00:21:49.171048  248387 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:21:49.171063  248387 pod_ready.go:38] duration metric: took 4m2.795014386s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:21:49.171097  248387 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:21:49.171149  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:21:49.171248  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:21:49.226512  248387 cri.go:89] found id: "d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:21:49.226543  248387 cri.go:89] found id: ""
	I1031 00:21:49.226555  248387 logs.go:284] 1 containers: [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850]
	I1031 00:21:49.226647  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.230993  248387 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:21:49.231060  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:21:49.270646  248387 cri.go:89] found id: "07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:21:49.270677  248387 cri.go:89] found id: ""
	I1031 00:21:49.270688  248387 logs.go:284] 1 containers: [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3]
	I1031 00:21:49.270760  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.275165  248387 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:21:49.275225  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:21:49.317730  248387 cri.go:89] found id: "12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:21:49.317757  248387 cri.go:89] found id: ""
	I1031 00:21:49.317768  248387 logs.go:284] 1 containers: [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e]
	I1031 00:21:49.317818  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.322362  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:21:49.322430  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:21:49.361430  248387 cri.go:89] found id: "6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:21:49.361462  248387 cri.go:89] found id: ""
	I1031 00:21:49.361474  248387 logs.go:284] 1 containers: [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c]
	I1031 00:21:49.361535  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.365642  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:21:49.365713  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:21:49.409230  248387 cri.go:89] found id: "744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:21:49.409258  248387 cri.go:89] found id: ""
	I1031 00:21:49.409269  248387 logs.go:284] 1 containers: [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373]
	I1031 00:21:49.409329  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.413540  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:21:49.413622  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:21:49.458477  248387 cri.go:89] found id: "d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:21:49.458506  248387 cri.go:89] found id: ""
	I1031 00:21:49.458518  248387 logs.go:284] 1 containers: [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb]
	I1031 00:21:49.458586  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.462471  248387 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:21:49.462540  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:21:49.498272  248387 cri.go:89] found id: ""
	I1031 00:21:49.498299  248387 logs.go:284] 0 containers: []
	W1031 00:21:49.498309  248387 logs.go:286] No container was found matching "kindnet"
	I1031 00:21:49.498316  248387 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:21:49.498386  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:21:49.538677  248387 cri.go:89] found id: "bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:21:49.538704  248387 cri.go:89] found id: ""
	I1031 00:21:49.538714  248387 logs.go:284] 1 containers: [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07]
	I1031 00:21:49.538776  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.544293  248387 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:21:49.544318  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:21:49.719505  248387 logs.go:123] Gathering logs for kube-apiserver [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850] ...
	I1031 00:21:49.719542  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:21:49.770108  248387 logs.go:123] Gathering logs for kube-scheduler [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c] ...
	I1031 00:21:49.770146  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:21:49.826250  248387 logs.go:123] Gathering logs for storage-provisioner [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07] ...
	I1031 00:21:49.826289  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:21:49.864212  248387 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:21:49.864244  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:21:50.278307  248387 logs.go:123] Gathering logs for container status ...
	I1031 00:21:50.278348  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:21:50.332860  248387 logs.go:123] Gathering logs for kubelet ...
	I1031 00:21:50.332894  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 00:21:50.413002  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.413224  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.413368  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.413524  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:21:50.435703  248387 logs.go:123] Gathering logs for dmesg ...
	I1031 00:21:50.435739  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:21:50.451836  248387 logs.go:123] Gathering logs for etcd [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3] ...
	I1031 00:21:50.451865  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:21:50.493883  248387 logs.go:123] Gathering logs for coredns [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e] ...
	I1031 00:21:50.493912  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:21:50.533935  248387 logs.go:123] Gathering logs for kube-proxy [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373] ...
	I1031 00:21:50.533967  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:21:50.582053  248387 logs.go:123] Gathering logs for kube-controller-manager [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb] ...
	I1031 00:21:50.582094  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:21:50.638988  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:21:50.639021  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 00:21:50.639177  248387 out.go:239] X Problems detected in kubelet:
	W1031 00:21:50.639191  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.639201  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.639213  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.639219  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:21:50.639225  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:21:50.639232  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:22:00.639748  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:22:00.663810  248387 api_server.go:72] duration metric: took 4m16.69659563s to wait for apiserver process to appear ...
	I1031 00:22:00.663846  248387 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:22:00.663904  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:22:00.663980  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:22:00.705584  248387 cri.go:89] found id: "d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:22:00.705611  248387 cri.go:89] found id: ""
	I1031 00:22:00.705620  248387 logs.go:284] 1 containers: [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850]
	I1031 00:22:00.705672  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.710031  248387 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:22:00.710113  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:22:00.747821  248387 cri.go:89] found id: "07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:22:00.747850  248387 cri.go:89] found id: ""
	I1031 00:22:00.747861  248387 logs.go:284] 1 containers: [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3]
	I1031 00:22:00.747926  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.752647  248387 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:22:00.752733  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:22:00.802165  248387 cri.go:89] found id: "12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:22:00.802200  248387 cri.go:89] found id: ""
	I1031 00:22:00.802210  248387 logs.go:284] 1 containers: [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e]
	I1031 00:22:00.802274  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.807367  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:22:00.807451  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:22:00.846633  248387 cri.go:89] found id: "6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:22:00.846661  248387 cri.go:89] found id: ""
	I1031 00:22:00.846670  248387 logs.go:284] 1 containers: [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c]
	I1031 00:22:00.846736  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.851197  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:22:00.851282  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:22:00.891522  248387 cri.go:89] found id: "744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:22:00.891549  248387 cri.go:89] found id: ""
	I1031 00:22:00.891559  248387 logs.go:284] 1 containers: [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373]
	I1031 00:22:00.891624  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.896269  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:22:00.896369  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:22:00.937565  248387 cri.go:89] found id: "d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:22:00.937594  248387 cri.go:89] found id: ""
	I1031 00:22:00.937606  248387 logs.go:284] 1 containers: [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb]
	I1031 00:22:00.937672  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.942205  248387 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:22:00.942287  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:22:00.984788  248387 cri.go:89] found id: ""
	I1031 00:22:00.984814  248387 logs.go:284] 0 containers: []
	W1031 00:22:00.984821  248387 logs.go:286] No container was found matching "kindnet"
	I1031 00:22:00.984827  248387 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:22:00.984883  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:22:01.032572  248387 cri.go:89] found id: "bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:22:01.032601  248387 cri.go:89] found id: ""
	I1031 00:22:01.032621  248387 logs.go:284] 1 containers: [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07]
	I1031 00:22:01.032685  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:01.037253  248387 logs.go:123] Gathering logs for container status ...
	I1031 00:22:01.037280  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:22:01.096027  248387 logs.go:123] Gathering logs for kubelet ...
	I1031 00:22:01.096065  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 00:22:01.166608  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:01.166786  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:01.166925  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:01.167075  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:22:01.188441  248387 logs.go:123] Gathering logs for etcd [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3] ...
	I1031 00:22:01.188473  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:22:01.238925  248387 logs.go:123] Gathering logs for coredns [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e] ...
	I1031 00:22:01.238961  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:22:01.278987  248387 logs.go:123] Gathering logs for kube-controller-manager [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb] ...
	I1031 00:22:01.279024  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:22:01.340249  248387 logs.go:123] Gathering logs for kube-proxy [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373] ...
	I1031 00:22:01.340284  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:22:01.381155  248387 logs.go:123] Gathering logs for storage-provisioner [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07] ...
	I1031 00:22:01.381191  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:22:01.421808  248387 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:22:01.421842  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:22:01.817836  248387 logs.go:123] Gathering logs for dmesg ...
	I1031 00:22:01.817877  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:22:01.832590  248387 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:22:01.832620  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:22:01.961348  248387 logs.go:123] Gathering logs for kube-apiserver [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850] ...
	I1031 00:22:01.961384  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:22:02.023997  248387 logs.go:123] Gathering logs for kube-scheduler [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c] ...
	I1031 00:22:02.024055  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:22:02.087279  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:22:02.087321  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 00:22:02.087437  248387 out.go:239] X Problems detected in kubelet:
	W1031 00:22:02.087460  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:02.087476  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:02.087485  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:02.087495  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:22:02.087513  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:22:02.087527  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:22:12.090012  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:22:12.096458  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 200:
	ok
	I1031 00:22:12.097833  248387 api_server.go:141] control plane version: v1.28.3
	I1031 00:22:12.097860  248387 api_server.go:131] duration metric: took 11.434005759s to wait for apiserver health ...
	I1031 00:22:12.097872  248387 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:22:12.097901  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:22:12.098004  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:22:12.161098  248387 cri.go:89] found id: "d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:22:12.161129  248387 cri.go:89] found id: ""
	I1031 00:22:12.161140  248387 logs.go:284] 1 containers: [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850]
	I1031 00:22:12.161199  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.166236  248387 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:22:12.166325  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:22:12.208793  248387 cri.go:89] found id: "07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:22:12.208815  248387 cri.go:89] found id: ""
	I1031 00:22:12.208824  248387 logs.go:284] 1 containers: [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3]
	I1031 00:22:12.208871  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.213722  248387 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:22:12.213791  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:22:12.256006  248387 cri.go:89] found id: "12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:22:12.256036  248387 cri.go:89] found id: ""
	I1031 00:22:12.256046  248387 logs.go:284] 1 containers: [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e]
	I1031 00:22:12.256116  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.260468  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:22:12.260546  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:22:12.305580  248387 cri.go:89] found id: "6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:22:12.305608  248387 cri.go:89] found id: ""
	I1031 00:22:12.305618  248387 logs.go:284] 1 containers: [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c]
	I1031 00:22:12.305687  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.313321  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:22:12.313390  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:22:12.359900  248387 cri.go:89] found id: "744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:22:12.359928  248387 cri.go:89] found id: ""
	I1031 00:22:12.359939  248387 logs.go:284] 1 containers: [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373]
	I1031 00:22:12.360003  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.364087  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:22:12.364171  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:22:12.403635  248387 cri.go:89] found id: "d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:22:12.403660  248387 cri.go:89] found id: ""
	I1031 00:22:12.403675  248387 logs.go:284] 1 containers: [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb]
	I1031 00:22:12.403743  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.408014  248387 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:22:12.408087  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:22:12.449718  248387 cri.go:89] found id: ""
	I1031 00:22:12.449741  248387 logs.go:284] 0 containers: []
	W1031 00:22:12.449748  248387 logs.go:286] No container was found matching "kindnet"
	I1031 00:22:12.449753  248387 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:22:12.449802  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:22:12.490301  248387 cri.go:89] found id: "bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:22:12.490330  248387 cri.go:89] found id: ""
	I1031 00:22:12.490340  248387 logs.go:284] 1 containers: [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07]
	I1031 00:22:12.490396  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.495061  248387 logs.go:123] Gathering logs for kube-proxy [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373] ...
	I1031 00:22:12.495125  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:22:12.537124  248387 logs.go:123] Gathering logs for kube-controller-manager [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb] ...
	I1031 00:22:12.537163  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:22:12.597600  248387 logs.go:123] Gathering logs for storage-provisioner [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07] ...
	I1031 00:22:12.597642  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:22:12.637344  248387 logs.go:123] Gathering logs for container status ...
	I1031 00:22:12.637385  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:22:12.691076  248387 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:22:12.691107  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:22:12.820546  248387 logs.go:123] Gathering logs for kube-apiserver [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850] ...
	I1031 00:22:12.820578  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:22:12.871913  248387 logs.go:123] Gathering logs for coredns [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e] ...
	I1031 00:22:12.871953  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:22:12.914661  248387 logs.go:123] Gathering logs for kube-scheduler [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c] ...
	I1031 00:22:12.914705  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:22:12.965771  248387 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:22:12.965810  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:22:13.352819  248387 logs.go:123] Gathering logs for kubelet ...
	I1031 00:22:13.352862  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 00:22:13.424722  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.424906  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.425062  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.425220  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:22:13.447363  248387 logs.go:123] Gathering logs for dmesg ...
	I1031 00:22:13.447393  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:22:13.462468  248387 logs.go:123] Gathering logs for etcd [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3] ...
	I1031 00:22:13.462502  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:22:13.507930  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:22:13.507960  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 00:22:13.508045  248387 out.go:239] X Problems detected in kubelet:
	W1031 00:22:13.508060  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.508072  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.508084  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.508097  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:22:13.508107  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:22:13.508114  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:22:23.516544  248387 system_pods.go:59] 8 kube-system pods found
	I1031 00:22:23.516574  248387 system_pods.go:61] "coredns-5dd5756b68-gp6pj" [b7086342-a1ed-42b3-819a-ad7d8211ad17] Running
	I1031 00:22:23.516579  248387 system_pods.go:61] "etcd-no-preload-640155" [d9381fc3-0181-4631-90e7-6749d37cf8ab] Running
	I1031 00:22:23.516584  248387 system_pods.go:61] "kube-apiserver-no-preload-640155" [26b9547d-6b10-428a-a26f-47b007f06402] Running
	I1031 00:22:23.516588  248387 system_pods.go:61] "kube-controller-manager-no-preload-640155" [7b5ec3dd-11a2-4409-a271-e3f4149c49fe] Running
	I1031 00:22:23.516592  248387 system_pods.go:61] "kube-proxy-pkjsl" [3cc67cf4-4a59-42bf-a6ca-b2be409f5077] Running
	I1031 00:22:23.516597  248387 system_pods.go:61] "kube-scheduler-no-preload-640155" [f027c450-e0ac-4184-88c8-5de421603b25] Running
	I1031 00:22:23.516604  248387 system_pods.go:61] "metrics-server-57f55c9bc5-d2xg4" [b16ae9e6-6deb-485f-af5c-35cafada4a39] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:22:23.516613  248387 system_pods.go:61] "storage-provisioner" [acf2b5d0-1773-4ee6-882d-daff300f9d80] Running
	I1031 00:22:23.516620  248387 system_pods.go:74] duration metric: took 11.418741675s to wait for pod list to return data ...
	I1031 00:22:23.516630  248387 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:22:23.520026  248387 default_sa.go:45] found service account: "default"
	I1031 00:22:23.520050  248387 default_sa.go:55] duration metric: took 3.413856ms for default service account to be created ...
	I1031 00:22:23.520058  248387 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:22:23.526672  248387 system_pods.go:86] 8 kube-system pods found
	I1031 00:22:23.526704  248387 system_pods.go:89] "coredns-5dd5756b68-gp6pj" [b7086342-a1ed-42b3-819a-ad7d8211ad17] Running
	I1031 00:22:23.526712  248387 system_pods.go:89] "etcd-no-preload-640155" [d9381fc3-0181-4631-90e7-6749d37cf8ab] Running
	I1031 00:22:23.526719  248387 system_pods.go:89] "kube-apiserver-no-preload-640155" [26b9547d-6b10-428a-a26f-47b007f06402] Running
	I1031 00:22:23.526729  248387 system_pods.go:89] "kube-controller-manager-no-preload-640155" [7b5ec3dd-11a2-4409-a271-e3f4149c49fe] Running
	I1031 00:22:23.526736  248387 system_pods.go:89] "kube-proxy-pkjsl" [3cc67cf4-4a59-42bf-a6ca-b2be409f5077] Running
	I1031 00:22:23.526753  248387 system_pods.go:89] "kube-scheduler-no-preload-640155" [f027c450-e0ac-4184-88c8-5de421603b25] Running
	I1031 00:22:23.526765  248387 system_pods.go:89] "metrics-server-57f55c9bc5-d2xg4" [b16ae9e6-6deb-485f-af5c-35cafada4a39] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:22:23.526776  248387 system_pods.go:89] "storage-provisioner" [acf2b5d0-1773-4ee6-882d-daff300f9d80] Running
	I1031 00:22:23.526789  248387 system_pods.go:126] duration metric: took 6.724214ms to wait for k8s-apps to be running ...
	I1031 00:22:23.526801  248387 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:22:23.526862  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:22:23.546006  248387 system_svc.go:56] duration metric: took 19.183151ms WaitForService to wait for kubelet.
	I1031 00:22:23.546038  248387 kubeadm.go:581] duration metric: took 4m39.57883274s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:22:23.546066  248387 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:22:23.550930  248387 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:22:23.550975  248387 node_conditions.go:123] node cpu capacity is 2
	I1031 00:22:23.551004  248387 node_conditions.go:105] duration metric: took 4.930974ms to run NodePressure ...
	I1031 00:22:23.551041  248387 start.go:228] waiting for startup goroutines ...
	I1031 00:22:23.551053  248387 start.go:233] waiting for cluster config update ...
	I1031 00:22:23.551064  248387 start.go:242] writing updated cluster config ...
	I1031 00:22:23.551346  248387 ssh_runner.go:195] Run: rm -f paused
	I1031 00:22:23.603812  248387 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1031 00:22:23.605925  248387 out.go:177] * Done! kubectl is now configured to use "no-preload-640155" cluster and "default" namespace by default
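	
	For reference, the per-component log collection shown above boils down to a handful of commands on the guest. A minimal shell sketch (the container ID is a placeholder; in the run above it comes from the crictl ps query):
	
	  # find the container ID for a component, e.g. etcd
	  sudo crictl ps -a --quiet --name=etcd
	  # tail the last 400 lines of that container's logs
	  sudo /usr/bin/crictl logs --tail 400 <container-id>
	  # runtime and kubelet logs come from journald
	  sudo journalctl -u crio -n 400
	  sudo journalctl -u kubelet -n 400
	  # kernel warnings and above
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400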
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-31 00:12:49 UTC, ends at Tue 2023-10-31 00:27:27 UTC. --
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.426338797Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712047426321194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ef2fb281-7ffc-4859-8f25-38c69c176d5f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.427106805Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c9c595d6-4517-4b5c-ad93-78ba5f8f62a7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.427169720Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c9c595d6-4517-4b5c-ad93-78ba5f8f62a7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.427389241Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:813f1afbf382aa04ee9ab12f144c6eb3976b64bac30b57e03c324ac08fd4ea11,PodSandboxId:c2da5d55b35eef79c0f1d94dca3535d4791cfd0aa646756a2aeb2fde5a160852,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711503371081417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 995d33e4-0d28-4efb-8d30-d5a05d04b61c,},Annotations:map[string]string{io.kubernetes.container.hash: 7328c257,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc2e201d615c23cdc675ddea668efcfe0894fcdd1d859ee087f211067711e58b,PodSandboxId:7f1e8084edcb44248ddafdd2e2ecfc747e71b1881df67aa1e868d4b3734346b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698711503008272380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77gzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7cb1c4a-2ad0-47b9-bca4-2e03d4e1cf39,},Annotations:map[string]string{io.kubernetes.container.hash: 8505e5c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aef40c6e4bfe3124f3ef087905b08918d48b996b714839dee7dccf2c015e837,PodSandboxId:3796f9fef2d869e41f233f1ce09fa13b899aec34351dba9af7dfeeec119f35a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698711502151909643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pjtg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c771175-3c51-4988-8b90-58ff0e33a5f8,},Annotations:map[string]string{io.kubernetes.container.hash: ce4a43d1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f64c01c7bd84f2382ba68e42d6ab3fe5c5bad706ae48085926125b1c3aa23dba,PodSandboxId:22670270d17793ff3d376e2d98ad881063cacd2a724649785bd9a0dd923c188f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698711478548158629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: df1f27d844d6669a28f6800dcf5d9773,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69023a2f35d6d12adc74adb3adad2f52a48bd524a1f912655d92ba31e9a24bdc,PodSandboxId:7f97ce24e0a5d8e765ecadd59dc52a3ebff5704a7d4d57d8c35cd9a380dc12d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698711478161631940,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442209055b3cd7cb3
c907644e1b24e12,},Annotations:map[string]string{io.kubernetes.container.hash: f7fa274a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0d1f50cf5cd5dc1a87581ee5317a31c21d00d219996334b0a2f3cbee1e70ff,PodSandboxId:eb12bb06257706a7cbf2d1ccdf84e68c056a4cda563b1f90fda5e93e7baac002,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698711477885986899,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 5747af2482af7359fd79d651fa78982a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68cebba71341b6b090d38cbaff10ff4cfbbdc381e95d94639ec7589dbcda0b5d,PodSandboxId:6c9a1afeb465f99437c1dc89dd3236f16b4ae59a8c5e43dccef61d5619771b68,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698711477858320375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 530401226fca04519b09aba8fa4e5da5,},Annotations:map[string]string{io.kubernetes.container.hash: 208c43ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c9c595d6-4517-4b5c-ad93-78ba5f8f62a7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.477677152Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=59663ba4-b15e-44c2-a75c-2f9cfcf5a9d6 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.477760037Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=59663ba4-b15e-44c2-a75c-2f9cfcf5a9d6 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.479402932Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7e39f7e6-d6f6-466f-86de-a4cf462c2463 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.480092740Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712047480072630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=7e39f7e6-d6f6-466f-86de-a4cf462c2463 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.481511383Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5f76d75f-1ae9-4563-8928-e125b717409c name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.481573399Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5f76d75f-1ae9-4563-8928-e125b717409c name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.482513044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:813f1afbf382aa04ee9ab12f144c6eb3976b64bac30b57e03c324ac08fd4ea11,PodSandboxId:c2da5d55b35eef79c0f1d94dca3535d4791cfd0aa646756a2aeb2fde5a160852,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711503371081417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 995d33e4-0d28-4efb-8d30-d5a05d04b61c,},Annotations:map[string]string{io.kubernetes.container.hash: 7328c257,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc2e201d615c23cdc675ddea668efcfe0894fcdd1d859ee087f211067711e58b,PodSandboxId:7f1e8084edcb44248ddafdd2e2ecfc747e71b1881df67aa1e868d4b3734346b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698711503008272380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77gzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7cb1c4a-2ad0-47b9-bca4-2e03d4e1cf39,},Annotations:map[string]string{io.kubernetes.container.hash: 8505e5c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aef40c6e4bfe3124f3ef087905b08918d48b996b714839dee7dccf2c015e837,PodSandboxId:3796f9fef2d869e41f233f1ce09fa13b899aec34351dba9af7dfeeec119f35a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698711502151909643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pjtg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c771175-3c51-4988-8b90-58ff0e33a5f8,},Annotations:map[string]string{io.kubernetes.container.hash: ce4a43d1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f64c01c7bd84f2382ba68e42d6ab3fe5c5bad706ae48085926125b1c3aa23dba,PodSandboxId:22670270d17793ff3d376e2d98ad881063cacd2a724649785bd9a0dd923c188f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698711478548158629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: df1f27d844d6669a28f6800dcf5d9773,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69023a2f35d6d12adc74adb3adad2f52a48bd524a1f912655d92ba31e9a24bdc,PodSandboxId:7f97ce24e0a5d8e765ecadd59dc52a3ebff5704a7d4d57d8c35cd9a380dc12d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698711478161631940,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442209055b3cd7cb3
c907644e1b24e12,},Annotations:map[string]string{io.kubernetes.container.hash: f7fa274a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0d1f50cf5cd5dc1a87581ee5317a31c21d00d219996334b0a2f3cbee1e70ff,PodSandboxId:eb12bb06257706a7cbf2d1ccdf84e68c056a4cda563b1f90fda5e93e7baac002,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698711477885986899,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 5747af2482af7359fd79d651fa78982a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68cebba71341b6b090d38cbaff10ff4cfbbdc381e95d94639ec7589dbcda0b5d,PodSandboxId:6c9a1afeb465f99437c1dc89dd3236f16b4ae59a8c5e43dccef61d5619771b68,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698711477858320375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 530401226fca04519b09aba8fa4e5da5,},Annotations:map[string]string{io.kubernetes.container.hash: 208c43ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5f76d75f-1ae9-4563-8928-e125b717409c name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.541137132Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=07f0a71b-4e7d-4c8a-8405-0fc5f199ca04 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.541225958Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=07f0a71b-4e7d-4c8a-8405-0fc5f199ca04 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.542341016Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b76484c5-57ad-4779-b2e2-6045258181b1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.543087195Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712047543069879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b76484c5-57ad-4779-b2e2-6045258181b1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.543610919Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1b57be98-2040-43bd-b0fb-5416c9f70adc name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.543679599Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1b57be98-2040-43bd-b0fb-5416c9f70adc name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.543909408Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:813f1afbf382aa04ee9ab12f144c6eb3976b64bac30b57e03c324ac08fd4ea11,PodSandboxId:c2da5d55b35eef79c0f1d94dca3535d4791cfd0aa646756a2aeb2fde5a160852,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711503371081417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 995d33e4-0d28-4efb-8d30-d5a05d04b61c,},Annotations:map[string]string{io.kubernetes.container.hash: 7328c257,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc2e201d615c23cdc675ddea668efcfe0894fcdd1d859ee087f211067711e58b,PodSandboxId:7f1e8084edcb44248ddafdd2e2ecfc747e71b1881df67aa1e868d4b3734346b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698711503008272380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77gzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7cb1c4a-2ad0-47b9-bca4-2e03d4e1cf39,},Annotations:map[string]string{io.kubernetes.container.hash: 8505e5c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aef40c6e4bfe3124f3ef087905b08918d48b996b714839dee7dccf2c015e837,PodSandboxId:3796f9fef2d869e41f233f1ce09fa13b899aec34351dba9af7dfeeec119f35a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698711502151909643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pjtg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c771175-3c51-4988-8b90-58ff0e33a5f8,},Annotations:map[string]string{io.kubernetes.container.hash: ce4a43d1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f64c01c7bd84f2382ba68e42d6ab3fe5c5bad706ae48085926125b1c3aa23dba,PodSandboxId:22670270d17793ff3d376e2d98ad881063cacd2a724649785bd9a0dd923c188f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698711478548158629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: df1f27d844d6669a28f6800dcf5d9773,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69023a2f35d6d12adc74adb3adad2f52a48bd524a1f912655d92ba31e9a24bdc,PodSandboxId:7f97ce24e0a5d8e765ecadd59dc52a3ebff5704a7d4d57d8c35cd9a380dc12d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698711478161631940,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442209055b3cd7cb3
c907644e1b24e12,},Annotations:map[string]string{io.kubernetes.container.hash: f7fa274a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0d1f50cf5cd5dc1a87581ee5317a31c21d00d219996334b0a2f3cbee1e70ff,PodSandboxId:eb12bb06257706a7cbf2d1ccdf84e68c056a4cda563b1f90fda5e93e7baac002,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698711477885986899,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 5747af2482af7359fd79d651fa78982a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68cebba71341b6b090d38cbaff10ff4cfbbdc381e95d94639ec7589dbcda0b5d,PodSandboxId:6c9a1afeb465f99437c1dc89dd3236f16b4ae59a8c5e43dccef61d5619771b68,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698711477858320375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 530401226fca04519b09aba8fa4e5da5,},Annotations:map[string]string{io.kubernetes.container.hash: 208c43ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1b57be98-2040-43bd-b0fb-5416c9f70adc name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.591462301Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e3e7700b-b281-4285-b68d-581c5c4001da name=/runtime.v1.RuntimeService/Version
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.591527047Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e3e7700b-b281-4285-b68d-581c5c4001da name=/runtime.v1.RuntimeService/Version
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.593163377Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=bd515492-467e-4805-9bb9-48ecca47f2c7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.593612761Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712047593599152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=bd515492-467e-4805-9bb9-48ecca47f2c7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.594449340Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e25d6c8b-a359-4ae8-a6aa-7a0f90d27b5d name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.594601768Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e25d6c8b-a359-4ae8-a6aa-7a0f90d27b5d name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:27:27 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:27:27.594785599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:813f1afbf382aa04ee9ab12f144c6eb3976b64bac30b57e03c324ac08fd4ea11,PodSandboxId:c2da5d55b35eef79c0f1d94dca3535d4791cfd0aa646756a2aeb2fde5a160852,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711503371081417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 995d33e4-0d28-4efb-8d30-d5a05d04b61c,},Annotations:map[string]string{io.kubernetes.container.hash: 7328c257,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc2e201d615c23cdc675ddea668efcfe0894fcdd1d859ee087f211067711e58b,PodSandboxId:7f1e8084edcb44248ddafdd2e2ecfc747e71b1881df67aa1e868d4b3734346b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698711503008272380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77gzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7cb1c4a-2ad0-47b9-bca4-2e03d4e1cf39,},Annotations:map[string]string{io.kubernetes.container.hash: 8505e5c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aef40c6e4bfe3124f3ef087905b08918d48b996b714839dee7dccf2c015e837,PodSandboxId:3796f9fef2d869e41f233f1ce09fa13b899aec34351dba9af7dfeeec119f35a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698711502151909643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pjtg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c771175-3c51-4988-8b90-58ff0e33a5f8,},Annotations:map[string]string{io.kubernetes.container.hash: ce4a43d1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f64c01c7bd84f2382ba68e42d6ab3fe5c5bad706ae48085926125b1c3aa23dba,PodSandboxId:22670270d17793ff3d376e2d98ad881063cacd2a724649785bd9a0dd923c188f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698711478548158629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: df1f27d844d6669a28f6800dcf5d9773,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69023a2f35d6d12adc74adb3adad2f52a48bd524a1f912655d92ba31e9a24bdc,PodSandboxId:7f97ce24e0a5d8e765ecadd59dc52a3ebff5704a7d4d57d8c35cd9a380dc12d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698711478161631940,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442209055b3cd7cb3
c907644e1b24e12,},Annotations:map[string]string{io.kubernetes.container.hash: f7fa274a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0d1f50cf5cd5dc1a87581ee5317a31c21d00d219996334b0a2f3cbee1e70ff,PodSandboxId:eb12bb06257706a7cbf2d1ccdf84e68c056a4cda563b1f90fda5e93e7baac002,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698711477885986899,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 5747af2482af7359fd79d651fa78982a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68cebba71341b6b090d38cbaff10ff4cfbbdc381e95d94639ec7589dbcda0b5d,PodSandboxId:6c9a1afeb465f99437c1dc89dd3236f16b4ae59a8c5e43dccef61d5619771b68,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698711477858320375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 530401226fca04519b09aba8fa4e5da5,},Annotations:map[string]string{io.kubernetes.container.hash: 208c43ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e25d6c8b-a359-4ae8-a6aa-7a0f90d27b5d name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	813f1afbf382a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   c2da5d55b35ee       storage-provisioner
	cc2e201d615c2       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   9 minutes ago       Running             kube-proxy                0                   7f1e8084edcb4       kube-proxy-77gzz
	7aef40c6e4bfe       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   3796f9fef2d86       coredns-5dd5756b68-pjtg4
	f64c01c7bd84f       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   9 minutes ago       Running             kube-scheduler            2                   22670270d1779       kube-scheduler-default-k8s-diff-port-892233
	69023a2f35d6d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   7f97ce24e0a5d       etcd-default-k8s-diff-port-892233
	5f0d1f50cf5cd       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   9 minutes ago       Running             kube-controller-manager   2                   eb12bb0625770       kube-controller-manager-default-k8s-diff-port-892233
	68cebba71341b       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   9 minutes ago       Running             kube-apiserver            2                   6c9a1afeb465f       kube-apiserver-default-k8s-diff-port-892233
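	
	  The same snapshot can be reproduced on the guest; a sketch, assuming minikube ssh works against this profile (the fallback to docker ps mirrors the command logged earlier in this run):
	
	    minikube -p default-k8s-diff-port-892233 ssh "sudo crictl ps -a || sudo docker ps -a"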
	
	* 
	* ==> coredns [7aef40c6e4bfe3124f3ef087905b08918d48b996b714839dee7dccf2c015e837] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:60598 - 57135 "HINFO IN 3441151810271889532.4856826152383992695. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010174265s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-892233
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-892233
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4
	                    minikube.k8s.io/name=default-k8s-diff-port-892233
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_31T00_18_06_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 00:18:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-892233
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 Oct 2023 00:27:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 00:23:34 +0000   Tue, 31 Oct 2023 00:17:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 00:23:34 +0000   Tue, 31 Oct 2023 00:17:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 00:23:34 +0000   Tue, 31 Oct 2023 00:17:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 00:23:34 +0000   Tue, 31 Oct 2023 00:18:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    default-k8s-diff-port-892233
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c0f68a3c36a4e5da9f1472b2df10596
	  System UUID:                5c0f68a3-c36a-4e5d-a9f1-472b2df10596
	  Boot ID:                    45d6a9e1-a1f1-47d9-a4b7-7aae0c4f98c9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-pjtg4                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 etcd-default-k8s-diff-port-892233                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-892233              250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-892233     200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-77gzz                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-default-k8s-diff-port-892233              100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-57f55c9bc5-8pc87                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  Starting                 9m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m31s (x8 over 9m31s)  kubelet          Node default-k8s-diff-port-892233 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m31s (x8 over 9m31s)  kubelet          Node default-k8s-diff-port-892233 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m31s (x7 over 9m31s)  kubelet          Node default-k8s-diff-port-892233 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node default-k8s-diff-port-892233 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node default-k8s-diff-port-892233 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node default-k8s-diff-port-892233 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s                  kubelet          Node default-k8s-diff-port-892233 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m21s                  kubelet          Node default-k8s-diff-port-892233 status is now: NodeReady
	  Normal  RegisteredNode           9m9s                   node-controller  Node default-k8s-diff-port-892233 event: Registered Node default-k8s-diff-port-892233 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct31 00:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069357] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.549182] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.557881] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.156785] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.572054] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct31 00:13] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.133449] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.171002] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.129828] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.239130] systemd-fstab-generator[698]: Ignoring "noauto" for root device
	[ +18.355885] systemd-fstab-generator[914]: Ignoring "noauto" for root device
	[ +19.559307] kauditd_printk_skb: 29 callbacks suppressed
	[Oct31 00:17] systemd-fstab-generator[3543]: Ignoring "noauto" for root device
	[Oct31 00:18] systemd-fstab-generator[3871]: Ignoring "noauto" for root device
	[ +13.447169] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.318558] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [69023a2f35d6d12adc74adb3adad2f52a48bd524a1f912655d92ba31e9a24bdc] <==
	* {"level":"info","ts":"2023-10-31T00:18:00.31274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 switched to configuration voters=(7818493287602331880)"}
	{"level":"info","ts":"2023-10-31T00:18:00.312969Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e20ba2e00cb0e827","local-member-id":"6c80de388e5020e8","added-peer-id":"6c80de388e5020e8","added-peer-peer-urls":["https://192.168.39.2:2380"]}
	{"level":"info","ts":"2023-10-31T00:18:00.328959Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-31T00:18:00.329075Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.2:2380"}
	{"level":"info","ts":"2023-10-31T00:18:00.329256Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.2:2380"}
	{"level":"info","ts":"2023-10-31T00:18:00.33638Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-31T00:18:00.336312Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"6c80de388e5020e8","initial-advertise-peer-urls":["https://192.168.39.2:2380"],"listen-peer-urls":["https://192.168.39.2:2380"],"advertise-client-urls":["https://192.168.39.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-31T00:18:00.562003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-31T00:18:00.562169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-31T00:18:00.562297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 received MsgPreVoteResp from 6c80de388e5020e8 at term 1"}
	{"level":"info","ts":"2023-10-31T00:18:00.562385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 became candidate at term 2"}
	{"level":"info","ts":"2023-10-31T00:18:00.562448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 received MsgVoteResp from 6c80de388e5020e8 at term 2"}
	{"level":"info","ts":"2023-10-31T00:18:00.562538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 became leader at term 2"}
	{"level":"info","ts":"2023-10-31T00:18:00.562602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6c80de388e5020e8 elected leader 6c80de388e5020e8 at term 2"}
	{"level":"info","ts":"2023-10-31T00:18:00.56706Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"6c80de388e5020e8","local-member-attributes":"{Name:default-k8s-diff-port-892233 ClientURLs:[https://192.168.39.2:2379]}","request-path":"/0/members/6c80de388e5020e8/attributes","cluster-id":"e20ba2e00cb0e827","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-31T00:18:00.567191Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T00:18:00.568764Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-31T00:18:00.569082Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T00:18:00.569957Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T00:18:00.570983Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.2:2379"}
	{"level":"info","ts":"2023-10-31T00:18:00.5736Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e20ba2e00cb0e827","local-member-id":"6c80de388e5020e8","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T00:18:00.574023Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T00:18:00.574158Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T00:18:00.581222Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-31T00:18:00.581346Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:27:27 up 14 min,  0 users,  load average: 0.16, 0.29, 0.26
	Linux default-k8s-diff-port-892233 5.10.57 #1 SMP Mon Oct 30 21:42:24 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [68cebba71341b6b090d38cbaff10ff4cfbbdc381e95d94639ec7589dbcda0b5d] <==
	* W1031 00:23:03.529290       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:23:03.529912       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1031 00:23:03.529969       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1031 00:23:03.530146       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:23:03.530217       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:23:03.531249       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:24:02.415637       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1031 00:24:03.530930       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:24:03.531076       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1031 00:24:03.531108       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1031 00:24:03.532216       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:24:03.532311       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:24:03.532410       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:25:02.415503       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1031 00:26:02.415274       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1031 00:26:03.532327       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:26:03.532473       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1031 00:26:03.532486       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1031 00:26:03.532563       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:26:03.532728       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:26:03.534149       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:27:02.415931       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [5f0d1f50cf5cd5dc1a87581ee5317a31c21d00d219996334b0a2f3cbee1e70ff] <==
	* I1031 00:21:49.619137       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:22:19.048084       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:22:19.627607       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:22:49.059359       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:22:49.637112       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:23:19.066630       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:23:19.645918       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:23:49.072122       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:23:49.656695       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1031 00:24:10.334482       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="428.982µs"
	E1031 00:24:19.078462       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:24:19.667209       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1031 00:24:24.330220       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="129.353µs"
	E1031 00:24:49.086098       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:24:49.677775       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:25:19.093054       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:25:19.686743       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:25:49.100345       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:25:49.695783       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:26:19.106469       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:26:19.703987       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:26:49.112757       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:26:49.713333       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:27:19.119097       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:27:19.725596       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [cc2e201d615c23cdc675ddea668efcfe0894fcdd1d859ee087f211067711e58b] <==
	* I1031 00:18:23.438712       1 server_others.go:69] "Using iptables proxy"
	I1031 00:18:23.472984       1 node.go:141] Successfully retrieved node IP: 192.168.39.2
	I1031 00:18:23.639673       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1031 00:18:23.639740       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1031 00:18:23.645001       1 server_others.go:152] "Using iptables Proxier"
	I1031 00:18:23.646139       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1031 00:18:23.646421       1 server.go:846] "Version info" version="v1.28.3"
	I1031 00:18:23.646431       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 00:18:23.654478       1 config.go:188] "Starting service config controller"
	I1031 00:18:23.655425       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1031 00:18:23.655674       1 config.go:97] "Starting endpoint slice config controller"
	I1031 00:18:23.656896       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1031 00:18:23.659044       1 config.go:315] "Starting node config controller"
	I1031 00:18:23.659089       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1031 00:18:23.756090       1 shared_informer.go:318] Caches are synced for service config
	I1031 00:18:23.757385       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1031 00:18:23.759234       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [f64c01c7bd84f2382ba68e42d6ab3fe5c5bad706ae48085926125b1c3aa23dba] <==
	* W1031 00:18:02.578597       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 00:18:02.578605       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1031 00:18:02.578897       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1031 00:18:02.579105       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1031 00:18:03.471528       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1031 00:18:03.471636       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1031 00:18:03.530606       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 00:18:03.531951       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1031 00:18:03.558710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1031 00:18:03.558765       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1031 00:18:03.592304       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1031 00:18:03.592401       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1031 00:18:03.685191       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1031 00:18:03.685289       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1031 00:18:03.702086       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1031 00:18:03.702235       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1031 00:18:03.734648       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1031 00:18:03.734894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1031 00:18:03.815647       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1031 00:18:03.815745       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1031 00:18:03.853349       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1031 00:18:03.853444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1031 00:18:03.872321       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1031 00:18:03.872415       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1031 00:18:06.557592       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-31 00:12:49 UTC, ends at Tue 2023-10-31 00:27:28 UTC. --
	Oct 31 00:24:39 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:24:39.314200    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:24:50 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:24:50.314931    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:25:01 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:25:01.313938    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:25:06 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:25:06.402757    3878 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 00:25:06 default-k8s-diff-port-892233 kubelet[3878]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 00:25:06 default-k8s-diff-port-892233 kubelet[3878]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 00:25:06 default-k8s-diff-port-892233 kubelet[3878]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 00:25:15 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:25:15.314075    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:25:29 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:25:29.314370    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:25:42 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:25:42.313942    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:25:57 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:25:57.313605    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:26:06 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:26:06.401398    3878 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 00:26:06 default-k8s-diff-port-892233 kubelet[3878]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 00:26:06 default-k8s-diff-port-892233 kubelet[3878]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 00:26:06 default-k8s-diff-port-892233 kubelet[3878]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 00:26:08 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:26:08.314645    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:26:22 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:26:22.314198    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:26:37 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:26:37.314205    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:26:51 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:26:51.314254    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:27:03 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:27:03.314255    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:27:06 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:27:06.402328    3878 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 00:27:06 default-k8s-diff-port-892233 kubelet[3878]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 00:27:06 default-k8s-diff-port-892233 kubelet[3878]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 00:27:06 default-k8s-diff-port-892233 kubelet[3878]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 00:27:17 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:27:17.314468    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	
	* 
	* ==> storage-provisioner [813f1afbf382aa04ee9ab12f144c6eb3976b64bac30b57e03c324ac08fd4ea11] <==
	* I1031 00:18:23.679879       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1031 00:18:23.691765       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1031 00:18:23.692068       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1031 00:18:23.701184       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1031 00:18:23.701946       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-892233_0ec05bdf-f9e5-4157-abaa-89a25bfea216!
	I1031 00:18:23.706506       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6c9cff2d-4c51-447b-9111-12ba65c70537", APIVersion:"v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-892233_0ec05bdf-f9e5-4157-abaa-89a25bfea216 became leader
	I1031 00:18:23.803725       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-892233_0ec05bdf-f9e5-4157-abaa-89a25bfea216!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-892233 -n default-k8s-diff-port-892233
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-892233 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-8pc87
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-892233 describe pod metrics-server-57f55c9bc5-8pc87
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-892233 describe pod metrics-server-57f55c9bc5-8pc87: exit status 1 (75.109908ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-8pc87" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-892233 describe pod metrics-server-57f55c9bc5-8pc87: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1031 00:20:53.680549  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
E1031 00:22:08.184743  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-225140 -n old-k8s-version-225140
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-10-31 00:29:27.779321594 +0000 UTC m=+5270.029338513
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-225140 -n old-k8s-version-225140
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-225140 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-225140 logs -n 25: (1.693616182s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p cert-options-344463                                 | cert-options-344463          | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:02 UTC | 31 Oct 23 00:02 UTC |
	| start   | -p no-preload-640155                                   | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:02 UTC | 31 Oct 23 00:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| start   | -p stopped-upgrade-237143                              | stopped-upgrade-237143       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p stopped-upgrade-237143                              | stopped-upgrade-237143       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC | 31 Oct 23 00:04 UTC |
	| start   | -p embed-certs-078843                                  | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC | 31 Oct 23 00:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-225140        | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC | 31 Oct 23 00:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-225140                              | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-640155             | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:05 UTC | 31 Oct 23 00:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-640155                                   | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p cert-expiration-663908                              | cert-expiration-663908       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:05 UTC | 31 Oct 23 00:06 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-078843            | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-078843                                  | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-663908                              | cert-expiration-663908       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-221554 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:06 UTC |
	|         | disable-driver-mounts-221554                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:07 UTC |
	|         | default-k8s-diff-port-892233                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-225140             | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-225140                              | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC | 31 Oct 23 00:20 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-892233  | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC | 31 Oct 23 00:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC |                     |
	|         | default-k8s-diff-port-892233                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-640155                  | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-640155                                   | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC | 31 Oct 23 00:22 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-078843                 | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-078843                                  | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:08 UTC | 31 Oct 23 00:17 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-892233       | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:09 UTC | 31 Oct 23 00:18 UTC |
	|         | default-k8s-diff-port-892233                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 00:09:59
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 00:09:59.171110  249055 out.go:296] Setting OutFile to fd 1 ...
	I1031 00:09:59.171372  249055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:09:59.171383  249055 out.go:309] Setting ErrFile to fd 2...
	I1031 00:09:59.171387  249055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:09:59.171591  249055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1031 00:09:59.172151  249055 out.go:303] Setting JSON to false
	I1031 00:09:59.173091  249055 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28351,"bootTime":1698682648,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 00:09:59.173154  249055 start.go:138] virtualization: kvm guest
	I1031 00:09:59.175712  249055 out.go:177] * [default-k8s-diff-port-892233] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 00:09:59.177218  249055 notify.go:220] Checking for updates...
	I1031 00:09:59.177238  249055 out.go:177]   - MINIKUBE_LOCATION=17527
	I1031 00:09:59.178590  249055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 00:09:59.179936  249055 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:09:59.181243  249055 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1031 00:09:59.182619  249055 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 00:09:59.184021  249055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 00:09:59.185755  249055 config.go:182] Loaded profile config "default-k8s-diff-port-892233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:09:59.186187  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:09:59.186242  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:09:59.200537  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I1031 00:09:59.201002  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:09:59.201576  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:09:59.201596  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:09:59.201949  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:09:59.202159  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:09:59.202362  249055 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 00:09:59.202635  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:09:59.202680  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:09:59.216197  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35869
	I1031 00:09:59.216575  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:09:59.216998  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:09:59.217027  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:09:59.217349  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:09:59.217537  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:09:59.250565  249055 out.go:177] * Using the kvm2 driver based on existing profile
	I1031 00:09:59.251974  249055 start.go:298] selected driver: kvm2
	I1031 00:09:59.251988  249055 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-892233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-892233 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.2 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:09:59.252123  249055 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 00:09:59.253132  249055 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:09:59.253220  249055 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17527-208817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 00:09:59.266948  249055 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 00:09:59.267297  249055 start_flags.go:934] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 00:09:59.267362  249055 cni.go:84] Creating CNI manager for ""
	I1031 00:09:59.267383  249055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:09:59.267401  249055 start_flags.go:323] config:
	{Name:default-k8s-diff-port-892233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-892233 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.2 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:09:59.267557  249055 iso.go:125] acquiring lock: {Name:mk17c26869b21ec4c3726ac5b4b2fb393d92c043 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:09:59.269225  249055 out.go:177] * Starting control plane node default-k8s-diff-port-892233 in cluster default-k8s-diff-port-892233
	I1031 00:09:57.481224  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:00.553221  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:09:59.270407  249055 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:09:59.270449  249055 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1031 00:09:59.270460  249055 cache.go:56] Caching tarball of preloaded images
	I1031 00:09:59.270553  249055 preload.go:174] Found /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1031 00:09:59.270569  249055 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1031 00:09:59.270702  249055 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/config.json ...
	I1031 00:09:59.270937  249055 start.go:365] acquiring machines lock for default-k8s-diff-port-892233: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 00:10:06.633217  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:09.705265  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:15.785240  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:18.857227  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:24.937215  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:28.009292  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:34.089205  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:37.161208  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:43.241288  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:46.313160  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:52.393273  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:55.465205  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:01.545192  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:04.617227  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:10.697233  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:13.769258  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:19.849250  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:22.921270  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:29.001178  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:32.073257  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:38.153271  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:41.225244  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:47.305235  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:50.377235  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:53.381665  248387 start.go:369] acquired machines lock for "no-preload-640155" in 4m7.945210729s
	I1031 00:11:53.381722  248387 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:11:53.381734  248387 fix.go:54] fixHost starting: 
	I1031 00:11:53.382372  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:11:53.382418  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:11:53.397155  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43017
	I1031 00:11:53.397704  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:11:53.398181  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:11:53.398206  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:11:53.398561  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:11:53.398761  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:11:53.398909  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:11:53.400611  248387 fix.go:102] recreateIfNeeded on no-preload-640155: state=Stopped err=<nil>
	I1031 00:11:53.400634  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	W1031 00:11:53.400782  248387 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:11:53.402394  248387 out.go:177] * Restarting existing kvm2 VM for "no-preload-640155" ...
	I1031 00:11:53.403767  248387 main.go:141] libmachine: (no-preload-640155) Calling .Start
	I1031 00:11:53.403944  248387 main.go:141] libmachine: (no-preload-640155) Ensuring networks are active...
	I1031 00:11:53.404678  248387 main.go:141] libmachine: (no-preload-640155) Ensuring network default is active
	I1031 00:11:53.405127  248387 main.go:141] libmachine: (no-preload-640155) Ensuring network mk-no-preload-640155 is active
	I1031 00:11:53.405642  248387 main.go:141] libmachine: (no-preload-640155) Getting domain xml...
	I1031 00:11:53.406300  248387 main.go:141] libmachine: (no-preload-640155) Creating domain...
	I1031 00:11:54.646418  248387 main.go:141] libmachine: (no-preload-640155) Waiting to get IP...
	I1031 00:11:54.647560  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:54.647956  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:54.648034  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:54.647947  249366 retry.go:31] will retry after 237.521879ms: waiting for machine to come up
	I1031 00:11:54.887446  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:54.887861  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:54.887895  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:54.887804  249366 retry.go:31] will retry after 320.996838ms: waiting for machine to come up
	I1031 00:11:53.379251  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:11:53.379302  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:11:53.381458  248084 machine.go:91] provisioned docker machine in 4m37.397131013s
	I1031 00:11:53.381513  248084 fix.go:56] fixHost completed within 4m37.420319931s
	I1031 00:11:53.381528  248084 start.go:83] releasing machines lock for "old-k8s-version-225140", held for 4m37.420354195s
	W1031 00:11:53.381569  248084 start.go:691] error starting host: provision: host is not running
	W1031 00:11:53.381676  248084 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1031 00:11:53.381687  248084 start.go:706] Will try again in 5 seconds ...
	I1031 00:11:55.210309  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:55.210784  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:55.210818  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:55.210728  249366 retry.go:31] will retry after 412.198071ms: waiting for machine to come up
	I1031 00:11:55.624299  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:55.624689  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:55.624721  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:55.624647  249366 retry.go:31] will retry after 596.339141ms: waiting for machine to come up
	I1031 00:11:56.222381  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:56.222918  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:56.222952  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:56.222864  249366 retry.go:31] will retry after 640.775314ms: waiting for machine to come up
	I1031 00:11:56.865881  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:56.866355  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:56.866394  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:56.866321  249366 retry.go:31] will retry after 797.697217ms: waiting for machine to come up
	I1031 00:11:57.665413  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:57.665930  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:57.665971  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:57.665871  249366 retry.go:31] will retry after 808.934364ms: waiting for machine to come up
	I1031 00:11:58.476161  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:58.476620  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:58.476651  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:58.476582  249366 retry.go:31] will retry after 1.198576442s: waiting for machine to come up
	I1031 00:11:59.676957  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:59.677540  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:59.677575  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:59.677462  249366 retry.go:31] will retry after 1.122967081s: waiting for machine to come up
	I1031 00:11:58.383586  248084 start.go:365] acquiring machines lock for old-k8s-version-225140: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 00:12:00.801790  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:00.802278  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:00.802313  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:00.802216  249366 retry.go:31] will retry after 2.182263229s: waiting for machine to come up
	I1031 00:12:02.987870  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:02.988307  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:02.988339  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:02.988235  249366 retry.go:31] will retry after 2.73312352s: waiting for machine to come up
	I1031 00:12:05.723196  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:05.723664  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:05.723695  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:05.723595  249366 retry.go:31] will retry after 2.33306923s: waiting for machine to come up
	I1031 00:12:08.060086  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:08.060364  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:08.060394  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:08.060328  249366 retry.go:31] will retry after 2.770780436s: waiting for machine to come up
	I1031 00:12:10.834601  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:10.834995  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:10.835020  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:10.834939  249366 retry.go:31] will retry after 4.389090657s: waiting for machine to come up
	I1031 00:12:16.389786  248718 start.go:369] acquired machines lock for "embed-certs-078843" in 3m38.778041195s
	I1031 00:12:16.389855  248718 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:12:16.389864  248718 fix.go:54] fixHost starting: 
	I1031 00:12:16.390317  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:12:16.390362  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:12:16.407875  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36031
	I1031 00:12:16.408273  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:12:16.408842  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:12:16.408870  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:12:16.409226  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:12:16.409404  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:16.409574  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:12:16.410975  248718 fix.go:102] recreateIfNeeded on embed-certs-078843: state=Stopped err=<nil>
	I1031 00:12:16.411013  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	W1031 00:12:16.411196  248718 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:12:16.413529  248718 out.go:177] * Restarting existing kvm2 VM for "embed-certs-078843" ...
	I1031 00:12:16.414858  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Start
	I1031 00:12:16.415041  248718 main.go:141] libmachine: (embed-certs-078843) Ensuring networks are active...
	I1031 00:12:16.415738  248718 main.go:141] libmachine: (embed-certs-078843) Ensuring network default is active
	I1031 00:12:16.416116  248718 main.go:141] libmachine: (embed-certs-078843) Ensuring network mk-embed-certs-078843 is active
	I1031 00:12:16.416450  248718 main.go:141] libmachine: (embed-certs-078843) Getting domain xml...
	I1031 00:12:16.417190  248718 main.go:141] libmachine: (embed-certs-078843) Creating domain...
	I1031 00:12:15.226912  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.227453  248387 main.go:141] libmachine: (no-preload-640155) Found IP for machine: 192.168.61.168
	I1031 00:12:15.227473  248387 main.go:141] libmachine: (no-preload-640155) Reserving static IP address...
	I1031 00:12:15.227513  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has current primary IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.227861  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "no-preload-640155", mac: "52:54:00:bd:a4:c2", ip: "192.168.61.168"} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.227890  248387 main.go:141] libmachine: (no-preload-640155) DBG | skip adding static IP to network mk-no-preload-640155 - found existing host DHCP lease matching {name: "no-preload-640155", mac: "52:54:00:bd:a4:c2", ip: "192.168.61.168"}
	I1031 00:12:15.227900  248387 main.go:141] libmachine: (no-preload-640155) Reserved static IP address: 192.168.61.168
	I1031 00:12:15.227919  248387 main.go:141] libmachine: (no-preload-640155) Waiting for SSH to be available...
	I1031 00:12:15.227938  248387 main.go:141] libmachine: (no-preload-640155) DBG | Getting to WaitForSSH function...
	I1031 00:12:15.230076  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.230450  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.230556  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.230578  248387 main.go:141] libmachine: (no-preload-640155) DBG | Using SSH client type: external
	I1031 00:12:15.230601  248387 main.go:141] libmachine: (no-preload-640155) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa (-rw-------)
	I1031 00:12:15.230646  248387 main.go:141] libmachine: (no-preload-640155) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.168 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:12:15.230666  248387 main.go:141] libmachine: (no-preload-640155) DBG | About to run SSH command:
	I1031 00:12:15.230678  248387 main.go:141] libmachine: (no-preload-640155) DBG | exit 0
	I1031 00:12:15.316515  248387 main.go:141] libmachine: (no-preload-640155) DBG | SSH cmd err, output: <nil>: 
	I1031 00:12:15.316855  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetConfigRaw
	I1031 00:12:15.317658  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetIP
	I1031 00:12:15.320306  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.320647  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.320679  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.321008  248387 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/config.json ...
	I1031 00:12:15.321252  248387 machine.go:88] provisioning docker machine ...
	I1031 00:12:15.321275  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:15.321492  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetMachineName
	I1031 00:12:15.321669  248387 buildroot.go:166] provisioning hostname "no-preload-640155"
	I1031 00:12:15.321691  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetMachineName
	I1031 00:12:15.321858  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.324151  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.324480  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.324518  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.324657  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:15.324849  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.325057  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.325237  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:15.325416  248387 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:15.325795  248387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.168 22 <nil> <nil>}
	I1031 00:12:15.325815  248387 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-640155 && echo "no-preload-640155" | sudo tee /etc/hostname
	I1031 00:12:15.450048  248387 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-640155
	
	I1031 00:12:15.450079  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.452951  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.453298  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.453344  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.453430  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:15.453657  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.453800  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.453899  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:15.454055  248387 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:15.454540  248387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.168 22 <nil> <nil>}
	I1031 00:12:15.454569  248387 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-640155' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-640155/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-640155' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:12:15.574041  248387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:12:15.574072  248387 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:12:15.574104  248387 buildroot.go:174] setting up certificates
	I1031 00:12:15.574116  248387 provision.go:83] configureAuth start
	I1031 00:12:15.574125  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetMachineName
	I1031 00:12:15.574451  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetIP
	I1031 00:12:15.577558  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.578020  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.578059  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.578197  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.580453  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.580832  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.580876  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.581078  248387 provision.go:138] copyHostCerts
	I1031 00:12:15.581171  248387 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:12:15.581184  248387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:12:15.581256  248387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:12:15.581407  248387 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:12:15.581420  248387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:12:15.581453  248387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:12:15.581522  248387 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:12:15.581530  248387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:12:15.581560  248387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:12:15.581611  248387 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.no-preload-640155 san=[192.168.61.168 192.168.61.168 localhost 127.0.0.1 minikube no-preload-640155]
	I1031 00:12:15.693832  248387 provision.go:172] copyRemoteCerts
	I1031 00:12:15.693906  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:12:15.693934  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.696811  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.697210  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.697258  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.697471  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:15.697683  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.697870  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:15.698054  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:12:15.781207  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:12:15.803665  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1031 00:12:15.826369  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 00:12:15.849259  248387 provision.go:86] duration metric: configureAuth took 275.127597ms
	I1031 00:12:15.849292  248387 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:12:15.849476  248387 config.go:182] Loaded profile config "no-preload-640155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:12:15.849565  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.852413  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.852804  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.852848  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.853027  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:15.853227  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.853440  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.853549  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:15.853724  248387 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:15.854104  248387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.168 22 <nil> <nil>}
	I1031 00:12:15.854132  248387 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:12:16.147033  248387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:12:16.147078  248387 machine.go:91] provisioned docker machine in 825.808812ms
	I1031 00:12:16.147094  248387 start.go:300] post-start starting for "no-preload-640155" (driver="kvm2")
	I1031 00:12:16.147110  248387 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:12:16.147138  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.147515  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:12:16.147545  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:16.150321  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.150755  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.150798  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.150909  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:16.151155  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.151335  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:16.151493  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:12:16.237897  248387 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:12:16.242343  248387 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:12:16.242367  248387 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:12:16.242440  248387 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:12:16.242526  248387 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:12:16.242636  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:12:16.250454  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:12:16.273390  248387 start.go:303] post-start completed in 126.280341ms
	I1031 00:12:16.273411  248387 fix.go:56] fixHost completed within 22.891678533s
	I1031 00:12:16.273433  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:16.276291  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.276598  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.276630  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.276761  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:16.276989  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.277270  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.277434  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:16.277621  248387 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:16.277984  248387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.168 22 <nil> <nil>}
	I1031 00:12:16.277998  248387 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 00:12:16.389581  248387 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698711136.336935137
	
	I1031 00:12:16.389607  248387 fix.go:206] guest clock: 1698711136.336935137
	I1031 00:12:16.389621  248387 fix.go:219] Guest: 2023-10-31 00:12:16.336935137 +0000 UTC Remote: 2023-10-31 00:12:16.273414732 +0000 UTC m=+271.294357841 (delta=63.520405ms)
	I1031 00:12:16.389652  248387 fix.go:190] guest clock delta is within tolerance: 63.520405ms
	I1031 00:12:16.389659  248387 start.go:83] releasing machines lock for "no-preload-640155", held for 23.007957251s
	I1031 00:12:16.389694  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.390027  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetIP
	I1031 00:12:16.392988  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.393466  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.393493  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.393639  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.394137  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.394306  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.394401  248387 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:12:16.394449  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:16.394583  248387 ssh_runner.go:195] Run: cat /version.json
	I1031 00:12:16.394619  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:16.397387  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.397690  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.397757  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.397785  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.397927  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:16.398140  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.398174  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.398206  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.398296  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:16.398430  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:16.398503  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:12:16.398616  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.398784  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:16.398936  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:12:16.520353  248387 ssh_runner.go:195] Run: systemctl --version
	I1031 00:12:16.526647  248387 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:12:16.673048  248387 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:12:16.679657  248387 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:12:16.679738  248387 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:12:16.699616  248387 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 00:12:16.699643  248387 start.go:472] detecting cgroup driver to use...
	I1031 00:12:16.699706  248387 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:12:16.717466  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:12:16.729231  248387 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:12:16.729300  248387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:12:16.741665  248387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:12:16.754175  248387 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:12:16.855649  248387 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:12:16.990153  248387 docker.go:214] disabling docker service ...
	I1031 00:12:16.990239  248387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:12:17.004614  248387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:12:17.017251  248387 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:12:17.143006  248387 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:12:17.257321  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 00:12:17.271045  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:12:17.288903  248387 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1031 00:12:17.289001  248387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:17.298419  248387 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:12:17.298516  248387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:17.308045  248387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:17.317176  248387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:17.327039  248387 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 00:12:17.337269  248387 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:12:17.345814  248387 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 00:12:17.345886  248387 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 00:12:17.359110  248387 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 00:12:17.369376  248387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:12:17.480359  248387 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1031 00:12:17.658006  248387 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:12:17.658099  248387 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:12:17.663296  248387 start.go:540] Will wait 60s for crictl version
	I1031 00:12:17.663467  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:17.667483  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:12:17.709866  248387 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:12:17.709956  248387 ssh_runner.go:195] Run: crio --version
	I1031 00:12:17.757817  248387 ssh_runner.go:195] Run: crio --version
	I1031 00:12:17.812918  248387 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
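	Editor's note (not part of the recorded log): the log above, from writing /etc/crictl.yaml at 00:12:17 through the crio restart, shows minikube preparing CRI-O over SSH on no-preload-640155. The "%!p(MISSING)" / "%!s(MISSING)" fragments in the logged commands are Go's fmt package flagging literal % verbs inside the command strings, not command failures. As a minimal consolidated sketch of that preparation (commands copied from the log lines above; replaying them by hand on a different guest image is an assumption, not something this test run did):
	    # Sketch only: CRI-O preparation steps as recorded in the log above.
	    # Point crictl at the CRI-O socket.
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	    # Pause image and cgroupfs driver, as set by the logged sed commands.
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	    # Bridge netfilter and IPv4 forwarding, then restart the runtime.
	    sudo modprobe br_netfilter
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	    sudo systemctl daemon-reload && sudo systemctl restart crio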
	I1031 00:12:17.814541  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetIP
	I1031 00:12:17.818008  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:17.818445  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:17.818482  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:17.818745  248387 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1031 00:12:17.822914  248387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:12:17.837885  248387 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:12:17.837941  248387 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:12:17.874977  248387 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1031 00:12:17.875010  248387 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1031 00:12:17.875097  248387 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:17.875104  248387 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:17.875130  248387 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:17.875163  248387 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1031 00:12:17.875181  248387 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:17.875233  248387 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:17.875297  248387 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:17.875306  248387 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:17.876689  248387 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:17.876731  248387 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:17.876696  248387 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:17.876842  248387 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:17.876697  248387 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:17.876695  248387 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:17.876704  248387 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:17.876842  248387 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1031 00:12:18.053090  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:18.059240  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1031 00:12:18.059239  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:18.065016  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:18.069953  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:18.071229  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:18.140026  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:18.149728  248387 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1031 00:12:18.149778  248387 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:18.149835  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.172611  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:18.238794  248387 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1031 00:12:18.238851  248387 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:18.238913  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331173  248387 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1031 00:12:18.331228  248387 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:18.331279  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331278  248387 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1031 00:12:18.331370  248387 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1031 00:12:18.331380  248387 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:18.331401  248387 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:18.331425  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331441  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331463  248387 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1031 00:12:18.331503  248387 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:18.331542  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:18.331584  248387 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1031 00:12:18.331632  248387 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:18.331665  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331545  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331591  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:18.348470  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:18.348506  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:18.348570  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:18.348619  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:18.484280  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:18.484369  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1031 00:12:18.484436  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1031 00:12:18.484501  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1031 00:12:18.484532  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I1031 00:12:18.513117  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1031 00:12:18.513211  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1031 00:12:18.513238  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I1031 00:12:18.513264  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1031 00:12:18.513307  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1031 00:12:18.513347  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1031 00:12:18.513392  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1031 00:12:18.513515  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1031 00:12:18.541278  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1031 00:12:18.541307  248387 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1031 00:12:18.541340  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1031 00:12:18.541348  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1031 00:12:18.541370  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1031 00:12:18.541416  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1031 00:12:18.541466  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1031 00:12:18.541493  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1031 00:12:18.541547  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1031 00:12:18.541549  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1031 00:12:17.727796  248718 main.go:141] libmachine: (embed-certs-078843) Waiting to get IP...
	I1031 00:12:17.728716  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:17.729132  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:17.729165  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:17.729087  249483 retry.go:31] will retry after 294.663443ms: waiting for machine to come up
	I1031 00:12:18.025671  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:18.026112  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:18.026145  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:18.026058  249483 retry.go:31] will retry after 377.887631ms: waiting for machine to come up
	I1031 00:12:18.405434  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:18.405878  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:18.405961  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:18.405857  249483 retry.go:31] will retry after 459.989463ms: waiting for machine to come up
	I1031 00:12:18.867094  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:18.867658  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:18.867693  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:18.867590  249483 retry.go:31] will retry after 552.876869ms: waiting for machine to come up
	I1031 00:12:19.422232  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:19.422678  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:19.422711  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:19.422642  249483 retry.go:31] will retry after 574.514705ms: waiting for machine to come up
	I1031 00:12:19.998587  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:19.999158  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:19.999195  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:19.999071  249483 retry.go:31] will retry after 903.246228ms: waiting for machine to come up
	I1031 00:12:20.904654  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:20.905083  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:20.905118  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:20.905028  249483 retry.go:31] will retry after 1.161301577s: waiting for machine to come up
	I1031 00:12:22.067416  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:22.067874  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:22.067906  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:22.067843  249483 retry.go:31] will retry after 1.350619049s: waiting for machine to come up
	I1031 00:12:23.419771  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:23.420313  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:23.420343  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:23.420276  249483 retry.go:31] will retry after 1.783701579s: waiting for machine to come up
	I1031 00:12:25.206301  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:25.206880  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:25.206909  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:25.206820  249483 retry.go:31] will retry after 2.304762715s: waiting for machine to come up
	I1031 00:12:25.834889  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.293473845s)
	I1031 00:12:25.834930  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1031 00:12:25.834949  248387 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.3: (7.293455157s)
	I1031 00:12:25.834967  248387 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1031 00:12:25.834986  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1031 00:12:25.835039  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1031 00:12:28.718454  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (2.883305744s)
	I1031 00:12:28.718498  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1031 00:12:28.718536  248387 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1031 00:12:28.718602  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1031 00:12:27.513250  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:27.513691  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:27.513726  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:27.513617  249483 retry.go:31] will retry after 2.77005827s: waiting for machine to come up
	I1031 00:12:30.287716  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:30.288125  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:30.288181  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:30.288095  249483 retry.go:31] will retry after 2.359494113s: waiting for machine to come up
	I1031 00:12:30.082206  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.363538098s)
	I1031 00:12:30.082241  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1031 00:12:30.082284  248387 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1031 00:12:30.082378  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1031 00:12:32.754830  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (2.672412397s)
	I1031 00:12:32.754865  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1031 00:12:32.754922  248387 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1031 00:12:32.755008  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1031 00:12:34.104402  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (1.3493522s)
	I1031 00:12:34.104443  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1031 00:12:34.104484  248387 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1031 00:12:34.104528  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
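	Editor's note (not part of the recorded log): the stat -c "%!s(MISSING) %!y(MISSING)" lines above are, by the same fmt artifact noted earlier, almost certainly stat -c "%s %y" run against each cached tarball under /var/lib/minikube/images; when the tarball is already present the copy from the host cache is skipped and the image is loaded straight into the runtime, which is what the "copy: skipping ... (exists)" and "Transferred and loaded ... from cache" lines record. A minimal sketch of that check-and-load step for one image (paths taken from the log; replaying it by hand is an assumption):
	    # Sketch only: cache check and load for one image tarball, per the log above.
	    stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0        # copy is skipped when this succeeds
	    sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0    # import into the CRI-O/podman image store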
	I1031 00:12:36.966451  249055 start.go:369] acquired machines lock for "default-k8s-diff-port-892233" in 2m37.695455763s
	I1031 00:12:36.966568  249055 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:12:36.966579  249055 fix.go:54] fixHost starting: 
	I1031 00:12:36.966927  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:12:36.966965  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:12:36.985392  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46007
	I1031 00:12:36.985889  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:12:36.986473  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:12:36.986501  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:12:36.986870  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:12:36.987100  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:12:36.987295  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:12:36.989416  249055 fix.go:102] recreateIfNeeded on default-k8s-diff-port-892233: state=Stopped err=<nil>
	I1031 00:12:36.989470  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	W1031 00:12:36.989641  249055 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:12:36.991746  249055 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-892233" ...
	I1031 00:12:32.648970  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:32.649516  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:32.649563  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:32.649477  249483 retry.go:31] will retry after 2.827972253s: waiting for machine to come up
	I1031 00:12:35.479127  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.479655  248718 main.go:141] libmachine: (embed-certs-078843) Found IP for machine: 192.168.50.2
	I1031 00:12:35.479691  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has current primary IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.479703  248718 main.go:141] libmachine: (embed-certs-078843) Reserving static IP address...
	I1031 00:12:35.480200  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "embed-certs-078843", mac: "52:54:00:f5:a8:73", ip: "192.168.50.2"} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.480259  248718 main.go:141] libmachine: (embed-certs-078843) DBG | skip adding static IP to network mk-embed-certs-078843 - found existing host DHCP lease matching {name: "embed-certs-078843", mac: "52:54:00:f5:a8:73", ip: "192.168.50.2"}
	I1031 00:12:35.480299  248718 main.go:141] libmachine: (embed-certs-078843) Reserved static IP address: 192.168.50.2
	I1031 00:12:35.480319  248718 main.go:141] libmachine: (embed-certs-078843) Waiting for SSH to be available...
	I1031 00:12:35.480334  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Getting to WaitForSSH function...
	I1031 00:12:35.482640  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.483140  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.483177  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.483343  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Using SSH client type: external
	I1031 00:12:35.483373  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa (-rw-------)
	I1031 00:12:35.483409  248718 main.go:141] libmachine: (embed-certs-078843) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:12:35.483434  248718 main.go:141] libmachine: (embed-certs-078843) DBG | About to run SSH command:
	I1031 00:12:35.483453  248718 main.go:141] libmachine: (embed-certs-078843) DBG | exit 0
	I1031 00:12:35.573283  248718 main.go:141] libmachine: (embed-certs-078843) DBG | SSH cmd err, output: <nil>: 
	I1031 00:12:35.573731  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetConfigRaw
	I1031 00:12:35.574538  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetIP
	I1031 00:12:35.577369  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.577820  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.577856  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.578175  248718 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/config.json ...
	I1031 00:12:35.578461  248718 machine.go:88] provisioning docker machine ...
	I1031 00:12:35.578486  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:35.578719  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetMachineName
	I1031 00:12:35.578919  248718 buildroot.go:166] provisioning hostname "embed-certs-078843"
	I1031 00:12:35.578946  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetMachineName
	I1031 00:12:35.579137  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:35.581632  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.582041  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.582075  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.582185  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:35.582376  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:35.582556  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:35.582694  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:35.582864  248718 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:35.583247  248718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I1031 00:12:35.583268  248718 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-078843 && echo "embed-certs-078843" | sudo tee /etc/hostname
	I1031 00:12:35.717684  248718 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-078843
	
	I1031 00:12:35.717719  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:35.720882  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.721264  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.721299  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.721514  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:35.721732  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:35.721908  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:35.722057  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:35.722318  248718 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:35.722757  248718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I1031 00:12:35.722777  248718 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-078843' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-078843/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-078843' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:12:35.865568  248718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:12:35.865626  248718 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:12:35.865667  248718 buildroot.go:174] setting up certificates
	I1031 00:12:35.865682  248718 provision.go:83] configureAuth start
	I1031 00:12:35.865696  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetMachineName
	I1031 00:12:35.866070  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetIP
	I1031 00:12:35.869149  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.869571  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.869610  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.869731  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:35.872260  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.872618  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.872665  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.872855  248718 provision.go:138] copyHostCerts
	I1031 00:12:35.872978  248718 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:12:35.873000  248718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:12:35.873069  248718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:12:35.873192  248718 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:12:35.873203  248718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:12:35.873234  248718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:12:35.873316  248718 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:12:35.873327  248718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:12:35.873352  248718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:12:35.873426  248718 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.embed-certs-078843 san=[192.168.50.2 192.168.50.2 localhost 127.0.0.1 minikube embed-certs-078843]
	I1031 00:12:36.016430  248718 provision.go:172] copyRemoteCerts
	I1031 00:12:36.016506  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:12:36.016553  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.019662  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.020054  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.020088  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.020286  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.020505  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.020658  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.020843  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:12:36.111793  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:12:36.140569  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1031 00:12:36.179708  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 00:12:36.203348  248718 provision.go:86] duration metric: configureAuth took 337.646698ms
	I1031 00:12:36.203385  248718 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:12:36.203690  248718 config.go:182] Loaded profile config "embed-certs-078843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:12:36.203835  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.207444  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.207883  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.207923  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.208236  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.208498  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.208690  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.208912  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.209163  248718 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:36.209521  248718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I1031 00:12:36.209547  248718 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:12:36.711502  248718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:12:36.711535  248718 machine.go:91] provisioned docker machine in 1.133056882s
	I1031 00:12:36.711550  248718 start.go:300] post-start starting for "embed-certs-078843" (driver="kvm2")
	I1031 00:12:36.711563  248718 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:12:36.711587  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.711984  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:12:36.712027  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.714954  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.715374  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.715408  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.715610  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.715815  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.716019  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.716192  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:12:36.803613  248718 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:12:36.808855  248718 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:12:36.808888  248718 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:12:36.808973  248718 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:12:36.809100  248718 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:12:36.809240  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:12:36.818339  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:12:36.845738  248718 start.go:303] post-start completed in 134.172265ms
	I1031 00:12:36.845765  248718 fix.go:56] fixHost completed within 20.4559017s
	I1031 00:12:36.845788  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.848249  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.848592  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.848621  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.848861  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.849120  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.849307  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.849462  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.849659  248718 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:36.850033  248718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I1031 00:12:36.850047  248718 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 00:12:36.966267  248718 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698711156.912809532
	
	I1031 00:12:36.966293  248718 fix.go:206] guest clock: 1698711156.912809532
	I1031 00:12:36.966303  248718 fix.go:219] Guest: 2023-10-31 00:12:36.912809532 +0000 UTC Remote: 2023-10-31 00:12:36.845768911 +0000 UTC m=+239.388163644 (delta=67.040621ms)
	I1031 00:12:36.966329  248718 fix.go:190] guest clock delta is within tolerance: 67.040621ms
	I1031 00:12:36.966341  248718 start.go:83] releasing machines lock for "embed-certs-078843", held for 20.576516085s
	I1031 00:12:36.966380  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.967388  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetIP
	I1031 00:12:36.970301  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.970734  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.970766  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.970934  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.971468  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.971683  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.971781  248718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:12:36.971832  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.971921  248718 ssh_runner.go:195] Run: cat /version.json
	I1031 00:12:36.971951  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.974873  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.975244  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.975323  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.975420  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.975692  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.975718  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.975759  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.975901  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.975959  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.976068  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.976221  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.976279  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.976358  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:12:36.977011  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:12:37.095751  248718 ssh_runner.go:195] Run: systemctl --version
	I1031 00:12:37.101600  248718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:12:37.244676  248718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:12:37.253623  248718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:12:37.253702  248718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:12:37.272872  248718 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 00:12:37.272897  248718 start.go:472] detecting cgroup driver to use...
	I1031 00:12:37.272992  248718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:12:37.290899  248718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:12:37.306570  248718 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:12:37.306633  248718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:12:37.321827  248718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:12:37.336787  248718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:12:37.451589  248718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:12:37.571290  248718 docker.go:214] disabling docker service ...
	I1031 00:12:37.571375  248718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:12:37.587764  248718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:12:37.600627  248718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:12:37.733539  248718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:12:37.850154  248718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 00:12:37.865463  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:12:37.883661  248718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1031 00:12:37.883728  248718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:37.892717  248718 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:12:37.892783  248718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:37.901944  248718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:37.911061  248718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
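The three sed edits above set the pause image, switch the cgroup manager to cgroupfs, and append conmon_cgroup = "pod" inside the 02-crio.conf drop-in. Below is a minimal sketch of the first two substitutions done with Go's regexp package instead of sed; the local file path is an assumption, since the real edits run as sudo sed over SSH on the guest.

package main

import (
	"os"
	"regexp"
)

func main() {
	// Hypothetical local path; minikube rewrites this file on the guest via sed over SSH.
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Point CRI-O at the desired pause image and cgroup manager,
	// mirroring the two sed substitutions in the log above.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
}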
	I1031 00:12:37.920094  248718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 00:12:37.929520  248718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:12:37.937333  248718 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 00:12:37.937404  248718 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 00:12:37.949591  248718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
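The sysctl probe above exits with status 255 because the bridge netfilter module is not loaded yet, so the flow falls back to modprobe br_netfilter and then turns on IPv4 forwarding. A minimal local sketch of that check-then-load-then-enable sequence, assuming root privileges (the real commands run with sudo over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the key is missing, br_netfilter has not been loaded yet.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge sysctl not present, loading br_netfilter")
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe failed:", err)
			os.Exit(1)
		}
	}
	// Enable IPv4 forwarding, as the log does with `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "enabling ip_forward failed:", err)
		os.Exit(1)
	}
}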
	I1031 00:12:37.960061  248718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:12:38.076354  248718 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1031 00:12:38.250618  248718 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:12:38.250688  248718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:12:38.255979  248718 start.go:540] Will wait 60s for crictl version
	I1031 00:12:38.256036  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:12:38.259822  248718 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:12:38.299812  248718 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:12:38.299981  248718 ssh_runner.go:195] Run: crio --version
	I1031 00:12:38.343088  248718 ssh_runner.go:195] Run: crio --version
	I1031 00:12:38.397252  248718 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1031 00:12:36.993369  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Start
	I1031 00:12:36.993641  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Ensuring networks are active...
	I1031 00:12:36.994545  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Ensuring network default is active
	I1031 00:12:36.994911  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Ensuring network mk-default-k8s-diff-port-892233 is active
	I1031 00:12:36.995448  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Getting domain xml...
	I1031 00:12:36.996378  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Creating domain...
	I1031 00:12:38.342502  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting to get IP...
	I1031 00:12:38.343505  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.344038  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.344115  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:38.344004  249635 retry.go:31] will retry after 206.530958ms: waiting for machine to come up
	I1031 00:12:38.552789  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.553109  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.553140  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:38.553059  249635 retry.go:31] will retry after 272.962928ms: waiting for machine to come up
	I1031 00:12:38.827741  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.828288  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.828326  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:38.828242  249635 retry.go:31] will retry after 411.85264ms: waiting for machine to come up
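The retry.go lines above show the wait-for-IP loop: each attempt re-reads the network's DHCP leases for the domain's MAC address and then sleeps for a roughly growing, jittered interval before trying again. A minimal sketch of that pattern; lookupLeaseIP is a hypothetical stand-in for the libvirt lease query, and the delay growth is only an approximation of the intervals printed in the log.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a hypothetical stand-in for querying the libvirt
// network for a DHCP lease matching the domain's MAC address.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter, roughly like the
		// "will retry after ..." intervals in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("machine %s did not get an IP within %v", mac, timeout)
}

func main() {
	if ip, err := waitForIP("52:54:00:f4:e2:1e", 2*time.Minute); err == nil {
		fmt.Println("got IP:", ip)
	} else {
		fmt.Println(err)
	}
}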
	I1031 00:12:35.048294  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1031 00:12:35.048344  248387 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1031 00:12:35.048404  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1031 00:12:36.902739  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (1.854307965s)
	I1031 00:12:36.902771  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1031 00:12:36.902803  248387 cache_images.go:123] Successfully loaded all cached images
	I1031 00:12:36.902810  248387 cache_images.go:92] LoadImages completed in 19.027785915s
	I1031 00:12:36.902926  248387 ssh_runner.go:195] Run: crio config
	I1031 00:12:36.961891  248387 cni.go:84] Creating CNI manager for ""
	I1031 00:12:36.961922  248387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:12:36.961950  248387 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:12:36.961992  248387 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.168 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-640155 NodeName:no-preload-640155 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 00:12:36.962203  248387 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-640155"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 00:12:36.962312  248387 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-640155 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-640155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 00:12:36.962389  248387 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 00:12:36.973945  248387 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:12:36.974026  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:12:36.987534  248387 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1031 00:12:37.006510  248387 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:12:37.025092  248387 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1031 00:12:37.045090  248387 ssh_runner.go:195] Run: grep 192.168.61.168	control-plane.minikube.internal$ /etc/hosts
	I1031 00:12:37.048822  248387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.168	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
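The one-liner above keeps /etc/hosts idempotent: any stale control-plane.minikube.internal entry is dropped, the current one is appended, and the result is installed via a temp file. A minimal local sketch of the same idea in Go, with the caveat that the real flow stages the file in /tmp and copies it with sudo because it runs as an unprivileged SSH user:

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.61.168\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale line for the control-plane alias so the entry is never duplicated.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	// The real flow writes /tmp/h.$$ and then `sudo cp`s it into place; writing
	// directly is enough for this local sketch.
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}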
	I1031 00:12:37.061985  248387 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155 for IP: 192.168.61.168
	I1031 00:12:37.062026  248387 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:12:37.062243  248387 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:12:37.062310  248387 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:12:37.062410  248387 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/client.key
	I1031 00:12:37.062508  248387 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/apiserver.key.96e3443b
	I1031 00:12:37.062570  248387 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/proxy-client.key
	I1031 00:12:37.062707  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:12:37.062750  248387 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:12:37.062767  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:12:37.062832  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:12:37.062877  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:12:37.062923  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:12:37.062987  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:12:37.063745  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:12:37.090011  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:12:37.119401  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:12:37.148361  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 00:12:37.173730  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:12:37.197769  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:12:37.221625  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:12:37.244497  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:12:37.274559  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:12:37.300372  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:12:37.332082  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:12:37.361826  248387 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:12:37.380561  248387 ssh_runner.go:195] Run: openssl version
	I1031 00:12:37.386185  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:12:37.396710  248387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:12:37.401896  248387 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:12:37.401983  248387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:12:37.407778  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:12:37.418091  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:12:37.427985  248387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:37.432581  248387 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:37.432649  248387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:37.438103  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 00:12:37.447792  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:12:37.457689  248387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:12:37.462421  248387 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:12:37.462495  248387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:12:37.468482  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
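The symlink names such as 3ec20f2e.0 and b5213941.0 come from OpenSSL's subject-hash lookup scheme: `openssl x509 -hash -noout` prints the subject hash, and the certificate is linked as <hash>.0 under /etc/ssl/certs so the TLS stack can find it by issuer. A minimal sketch of that step, shelling out to openssl the same way the log does over SSH (the certificate path is just the one shown above):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert installs certPath under dir using OpenSSL's <subject-hash>.0
// naming convention, mirroring the `openssl x509 -hash` + `ln -fs` steps above.
func linkCert(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // -fs semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}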
	I1031 00:12:37.478565  248387 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:12:37.483264  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:12:37.491175  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:12:37.498212  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:12:37.504019  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:12:37.509730  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:12:37.516218  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
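Each of the `-checkend 86400` runs above exits 0 only if the certificate will still be valid for at least the next 86400 seconds (24 hours); a non-zero exit is what would trigger regeneration instead of reuse. A minimal sketch of interpreting that exit status (the two paths are simply examples from the list above):

package main

import (
	"fmt"
	"os/exec"
)

// validForADay reports whether the certificate at path will still be
// valid 86400 seconds (24h) from now, using openssl's -checkend flag.
func validForADay(path string) bool {
	// Exit status 0 means "will not expire within the window".
	return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run() == nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		fmt.Printf("%s valid for 24h: %v\n", p, validForADay(p))
	}
}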
	I1031 00:12:37.523364  248387 kubeadm.go:404] StartCluster: {Name:no-preload-640155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.3 ClusterName:no-preload-640155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.168 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:12:37.523465  248387 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:12:37.523522  248387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:12:37.576223  248387 cri.go:89] found id: ""
	I1031 00:12:37.576314  248387 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 00:12:37.586094  248387 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 00:12:37.586133  248387 kubeadm.go:636] restartCluster start
	I1031 00:12:37.586217  248387 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 00:12:37.595614  248387 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:37.596791  248387 kubeconfig.go:92] found "no-preload-640155" server: "https://192.168.61.168:8443"
	I1031 00:12:37.600710  248387 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 00:12:37.610066  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:37.610137  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:37.620501  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:37.620528  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:37.620578  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:37.630477  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:38.131205  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:38.131335  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:38.144627  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:38.631491  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:38.631587  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:38.647034  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:39.131616  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:39.131749  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:39.148723  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:39.631171  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:39.631273  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:39.645807  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
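This run of "Checking apiserver status ..." lines is a fixed-interval poll: roughly every half second the flow runs pgrep for the apiserver process and keeps going until a pid appears or the restart logic gives up (the "needs reconfigure" decision later in the log). A minimal local sketch of that polling loop, assuming a short illustrative deadline; the real commands run via sudo over SSH and the real wait is bounded by the restart timeout:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// apiserverPID runs the same pgrep pattern as the log and returns the pid, if any.
func apiserverPID() (string, bool) {
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", false // exit status 1: no matching process yet
	}
	return strings.TrimSpace(string(out)), true
}

func main() {
	deadline := time.Now().Add(10 * time.Second) // illustrative; the real bound is larger
	for time.Now().Before(deadline) {
		if pid, ok := apiserverPID(); ok {
			fmt.Println("apiserver pid:", pid)
			return
		}
		fmt.Println("Checking apiserver status ... not running yet")
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never came up; cluster needs reconfigure")
}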
	I1031 00:12:38.398862  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetIP
	I1031 00:12:38.401804  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:38.402158  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:38.402193  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:38.402475  248718 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1031 00:12:38.407041  248718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:12:38.421147  248718 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:12:38.421228  248718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:12:38.461162  248718 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1031 00:12:38.461240  248718 ssh_runner.go:195] Run: which lz4
	I1031 00:12:38.465401  248718 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1031 00:12:38.469796  248718 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 00:12:38.469833  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1031 00:12:40.419642  248718 crio.go:444] Took 1.954260 seconds to copy over tarball
	I1031 00:12:40.419721  248718 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 00:12:39.241872  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:39.242407  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:39.242465  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:39.242347  249635 retry.go:31] will retry after 371.774477ms: waiting for machine to come up
	I1031 00:12:39.616171  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:39.616708  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:39.616747  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:39.616671  249635 retry.go:31] will retry after 487.120901ms: waiting for machine to come up
	I1031 00:12:40.105492  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:40.106116  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:40.106151  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:40.106066  249635 retry.go:31] will retry after 767.19349ms: waiting for machine to come up
	I1031 00:12:40.875432  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:40.875932  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:40.876009  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:40.875892  249635 retry.go:31] will retry after 976.411998ms: waiting for machine to come up
	I1031 00:12:41.854227  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:41.854759  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:41.854794  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:41.854691  249635 retry.go:31] will retry after 1.041793781s: waiting for machine to come up
	I1031 00:12:42.898223  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:42.898628  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:42.898658  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:42.898577  249635 retry.go:31] will retry after 1.163252223s: waiting for machine to come up
	I1031 00:12:44.064217  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:44.064593  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:44.064626  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:44.064543  249635 retry.go:31] will retry after 1.879015473s: waiting for machine to come up
	I1031 00:12:40.131216  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:40.131331  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:40.146846  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:40.630673  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:40.630747  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:40.642955  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:41.131275  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:41.131410  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:41.144530  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:41.631108  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:41.631219  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:41.645873  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:42.131506  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:42.131641  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:42.147504  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:42.630664  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:42.630769  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:42.645755  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:43.131375  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:43.131503  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:43.143357  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:43.631616  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:43.631714  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:43.647203  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.130693  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.130791  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.143566  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.630736  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.630816  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.642486  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:43.535831  248718 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.116078442s)
	I1031 00:12:43.535864  248718 crio.go:451] Took 3.116189 seconds to extract the tarball
	I1031 00:12:43.535877  248718 ssh_runner.go:146] rm: /preloaded.tar.lz4
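Because the guest reported no preloaded images, the ~457 MB preload tarball was copied over SCP, unpacked into /var, and then removed to free the space again. A minimal sketch of the unpack-and-clean-up step, assuming the tarball has already been copied to the path shown in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	// Extract the preloaded container images and metadata into /var,
	// as `sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4` does in the log.
	cmd := exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
	// The tarball is only a transfer vehicle; remove it to reclaim the guest disk space.
	if err := os.Remove(tarball); err != nil {
		fmt.Fprintln(os.Stderr, "cleanup failed:", err)
		os.Exit(1)
	}
}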
	I1031 00:12:43.579902  248718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:12:43.635701  248718 crio.go:496] all images are preloaded for cri-o runtime.
	I1031 00:12:43.635724  248718 cache_images.go:84] Images are preloaded, skipping loading
	I1031 00:12:43.635796  248718 ssh_runner.go:195] Run: crio config
	I1031 00:12:43.714916  248718 cni.go:84] Creating CNI manager for ""
	I1031 00:12:43.714939  248718 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:12:43.714958  248718 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:12:43.714976  248718 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-078843 NodeName:embed-certs-078843 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 00:12:43.715146  248718 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-078843"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 00:12:43.715232  248718 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-078843 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-078843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 00:12:43.715295  248718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 00:12:43.726847  248718 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:12:43.726938  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:12:43.738352  248718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1031 00:12:43.756439  248718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:12:43.773955  248718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1031 00:12:43.793790  248718 ssh_runner.go:195] Run: grep 192.168.50.2	control-plane.minikube.internal$ /etc/hosts
	I1031 00:12:43.798155  248718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:12:43.811602  248718 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843 for IP: 192.168.50.2
	I1031 00:12:43.811649  248718 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:12:43.811819  248718 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:12:43.811877  248718 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:12:43.811963  248718 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/client.key
	I1031 00:12:43.812051  248718 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/apiserver.key.e10f976c
	I1031 00:12:43.812117  248718 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/proxy-client.key
	I1031 00:12:43.812261  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:12:43.812301  248718 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:12:43.812317  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:12:43.812359  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:12:43.812395  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:12:43.812430  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:12:43.812491  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:12:43.813192  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:12:43.841097  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:12:43.867995  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:12:43.892834  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1031 00:12:43.917649  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:12:43.942299  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:12:43.971154  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:12:43.995032  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:12:44.022277  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:12:44.047549  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:12:44.071370  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:12:44.095933  248718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:12:44.113479  248718 ssh_runner.go:195] Run: openssl version
	I1031 00:12:44.119266  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:12:44.133710  248718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:12:44.140098  248718 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:12:44.140180  248718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:12:44.146416  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:12:44.159207  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:12:44.171618  248718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:44.178288  248718 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:44.178375  248718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:44.186339  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 00:12:44.200864  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:12:44.212513  248718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:12:44.217549  248718 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:12:44.217616  248718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:12:44.225170  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1031 00:12:44.239600  248718 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:12:44.244470  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:12:44.252637  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:12:44.260635  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:12:44.269017  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:12:44.277210  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:12:44.285394  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1031 00:12:44.293419  248718 kubeadm.go:404] StartCluster: {Name:embed-certs-078843 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.3 ClusterName:embed-certs-078843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:12:44.293507  248718 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:12:44.293620  248718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:12:44.339212  248718 cri.go:89] found id: ""
	I1031 00:12:44.339302  248718 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 00:12:44.350219  248718 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 00:12:44.350249  248718 kubeadm.go:636] restartCluster start
	I1031 00:12:44.350315  248718 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 00:12:44.360185  248718 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.361826  248718 kubeconfig.go:92] found "embed-certs-078843" server: "https://192.168.50.2:8443"
	I1031 00:12:44.365579  248718 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 00:12:44.376923  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.377021  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.390684  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.390708  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.390768  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.404614  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.905332  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.905451  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.918162  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:45.405760  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:45.405845  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:45.419071  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:45.905669  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:45.905770  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:45.922243  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:46.404757  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:46.404870  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:46.419662  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:46.905223  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:46.905328  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:46.919993  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:47.405571  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:47.405660  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:47.418433  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:45.944837  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:45.945386  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:45.945422  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:45.945318  249635 retry.go:31] will retry after 1.840120385s: waiting for machine to come up
	I1031 00:12:47.787276  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:47.787807  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:47.787844  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:47.787751  249635 retry.go:31] will retry after 2.306470153s: waiting for machine to come up
	I1031 00:12:45.131185  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:45.225229  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:45.237425  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:45.630872  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:45.630948  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:45.644580  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:46.131199  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:46.131280  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:46.142872  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:46.631467  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:46.631545  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:46.648339  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:47.130861  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:47.131000  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:47.146189  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:47.610939  248387 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:12:47.610999  248387 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:12:47.611016  248387 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:12:47.611107  248387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:12:47.656888  248387 cri.go:89] found id: ""
	I1031 00:12:47.656982  248387 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:12:47.678724  248387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:12:47.688879  248387 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:12:47.688985  248387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:12:47.697091  248387 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:12:47.697115  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:47.837056  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:48.448497  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:48.639877  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:48.735406  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:48.824428  248387 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:12:48.824521  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:48.840207  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:49.357050  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:49.857029  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
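The five `kubeadm init phase` runs logged above (certs, kubeconfig, kubelet-start, control-plane, etcd) are how the restart path rebuilds the control plane from the saved /var/tmp/minikube/kubeadm.yaml before waiting for the apiserver process. A minimal standalone Go sketch of driving that same phase sequence with local exec is shown below; the runPhase helper and hard-coded paths mirror the logged commands but are illustrative assumptions, not minikube's actual ssh_runner code.

// Illustrative sketch only: run the kubeadm "init phase" sequence shown in the
// log above, using local exec instead of minikube's ssh_runner. The binary
// path prefix and config path are copied from the logged commands.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func runPhase(phase string) error {
	cmd := fmt.Sprintf(
		`sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
		phase,
	)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
	}
	return nil
}

func main() {
	// Same order as the restart path logged above.
	for _, phase := range []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"} {
		if err := runPhase(phase); err != nil {
			log.Fatal(err)
		}
	}
}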
	I1031 00:12:47.905449  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:47.905552  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:47.921939  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:48.405557  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:48.405656  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:48.417674  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:48.905114  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:48.905225  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:48.919218  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:49.404811  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:49.404908  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:49.420062  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:49.905655  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:49.905769  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:49.922828  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:50.405471  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:50.405578  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:50.423259  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:50.904727  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:50.904819  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:50.920673  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:51.405155  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:51.405246  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:51.421731  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:51.905024  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:51.905101  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:51.919385  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:52.404843  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:52.404985  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:52.420088  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:50.095827  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:50.096326  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:50.096365  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:50.096281  249635 retry.go:31] will retry after 3.872051375s: waiting for machine to come up
	I1031 00:12:53.970393  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:53.970918  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:53.970956  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:53.970839  249635 retry.go:31] will retry after 5.345847198s: waiting for machine to come up
	I1031 00:12:50.357101  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:50.857024  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:51.357298  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:51.380143  248387 api_server.go:72] duration metric: took 2.555721824s to wait for apiserver process to appear ...
	I1031 00:12:51.380180  248387 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:12:51.380220  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:54.457683  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:12:54.457719  248387 api_server.go:103] status: https://192.168.61.168:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:12:54.457733  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:54.509385  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:12:54.509424  248387 api_server.go:103] status: https://192.168.61.168:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:12:55.010185  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:55.017172  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:12:55.017201  248387 api_server.go:103] status: https://192.168.61.168:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:12:55.510171  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:55.517062  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:12:55.517114  248387 api_server.go:103] status: https://192.168.61.168:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:12:56.009671  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:56.017135  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 200:
	ok
	I1031 00:12:56.026278  248387 api_server.go:141] control plane version: v1.28.3
	I1031 00:12:56.026307  248387 api_server.go:131] duration metric: took 4.646117858s to wait for apiserver health ...
	I1031 00:12:56.026319  248387 cni.go:84] Creating CNI manager for ""
	I1031 00:12:56.026331  248387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:12:56.028208  248387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
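The healthz exchange above (anonymous 403s, then 500 while the rbac/bootstrap-roles poststart hook finishes, then 200) is the usual progression while a restarted apiserver bootstraps itself. A small Go sketch of the same poll is shown below, assuming the logged https://192.168.61.168:8443/healthz endpoint and skipping TLS verification the way an unauthenticated probe would; names and retry timing are illustrative, not minikube's api_server.go.

// Illustrative sketch: poll the apiserver /healthz endpoint until it returns
// 200, mirroring the checks logged above. URL and interval are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Anonymous probe: skip certificate verification for this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.168:8443/healthz"
	for {
		resp, err := client.Get(url)
		if err != nil {
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Printf("healthz ok: %s\n", body)
			return
		}
		// 403 (RBAC not yet bootstrapped) or 500 (poststart hooks pending): retry.
		time.Sleep(500 * time.Millisecond)
	}
}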
	I1031 00:12:52.904735  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:52.904835  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:52.917320  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:53.405426  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:53.405546  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:53.420386  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:53.904921  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:53.905039  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:53.917303  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:54.377921  248718 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:12:54.377976  248718 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:12:54.377991  248718 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:12:54.378079  248718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:12:54.418685  248718 cri.go:89] found id: ""
	I1031 00:12:54.418768  248718 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:12:54.436536  248718 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:12:54.451466  248718 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:12:54.451534  248718 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:12:54.464460  248718 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:12:54.464484  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:54.601286  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:55.468262  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:55.664604  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:55.761171  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:55.838690  248718 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:12:55.838793  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:55.857817  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:56.379368  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:56.878782  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:57.379756  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:56.029552  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:12:56.078774  248387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:12:56.128262  248387 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:12:56.139995  248387 system_pods.go:59] 8 kube-system pods found
	I1031 00:12:56.140025  248387 system_pods.go:61] "coredns-5dd5756b68-qbvjb" [92f771d8-381b-4e38-945f-ad5ceae72b80] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1031 00:12:56.140035  248387 system_pods.go:61] "etcd-no-preload-640155" [44fcbc32-757b-4406-97ed-88ad76ae4eee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1031 00:12:56.140042  248387 system_pods.go:61] "kube-apiserver-no-preload-640155" [b92b3dec-827f-4221-8c28-83a738186e52] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1031 00:12:56.140048  248387 system_pods.go:61] "kube-controller-manager-no-preload-640155" [62661788-bde2-42b9-9469-a2f2c51ee283] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1031 00:12:56.140057  248387 system_pods.go:61] "kube-proxy-rv76j" [293b1dd9-fc85-4647-91c9-874ad357d222] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1031 00:12:56.140063  248387 system_pods.go:61] "kube-scheduler-no-preload-640155" [6a11d962-b407-467e-b8a0-9a101b16e4d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1031 00:12:56.140076  248387 system_pods.go:61] "metrics-server-57f55c9bc5-nm8dj" [3924727e-2734-497d-b1b1-d8f9a0ab095a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:12:56.140090  248387 system_pods.go:61] "storage-provisioner" [f8e0a3fa-eaf1-45e1-afbc-a5b2287e7703] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1031 00:12:56.140100  248387 system_pods.go:74] duration metric: took 11.816257ms to wait for pod list to return data ...
	I1031 00:12:56.140110  248387 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:12:56.143298  248387 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:12:56.143327  248387 node_conditions.go:123] node cpu capacity is 2
	I1031 00:12:56.143365  248387 node_conditions.go:105] duration metric: took 3.247248ms to run NodePressure ...
	I1031 00:12:56.143402  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:56.398227  248387 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1031 00:12:56.403101  248387 kubeadm.go:787] kubelet initialised
	I1031 00:12:56.403124  248387 kubeadm.go:788] duration metric: took 4.866042ms waiting for restarted kubelet to initialise ...
	I1031 00:12:56.403134  248387 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:12:56.408758  248387 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-qbvjb" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:56.416185  248387 pod_ready.go:97] node "no-preload-640155" hosting pod "coredns-5dd5756b68-qbvjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.416218  248387 pod_ready.go:81] duration metric: took 7.431969ms waiting for pod "coredns-5dd5756b68-qbvjb" in "kube-system" namespace to be "Ready" ...
	E1031 00:12:56.416229  248387 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-640155" hosting pod "coredns-5dd5756b68-qbvjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.416238  248387 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:56.421589  248387 pod_ready.go:97] node "no-preload-640155" hosting pod "etcd-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.421611  248387 pod_ready.go:81] duration metric: took 5.364261ms waiting for pod "etcd-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	E1031 00:12:56.421619  248387 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-640155" hosting pod "etcd-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.421624  248387 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:56.427046  248387 pod_ready.go:97] node "no-preload-640155" hosting pod "kube-apiserver-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.427075  248387 pod_ready.go:81] duration metric: took 5.443698ms waiting for pod "kube-apiserver-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	E1031 00:12:56.427086  248387 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-640155" hosting pod "kube-apiserver-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.427098  248387 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:56.534169  248387 pod_ready.go:97] node "no-preload-640155" hosting pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.534224  248387 pod_ready.go:81] duration metric: took 107.102474ms waiting for pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	E1031 00:12:56.534241  248387 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-640155" hosting pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.534255  248387 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rv76j" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:57.332793  248387 pod_ready.go:92] pod "kube-proxy-rv76j" in "kube-system" namespace has status "Ready":"True"
	I1031 00:12:57.332824  248387 pod_ready.go:81] duration metric: took 798.55794ms waiting for pod "kube-proxy-rv76j" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:57.332838  248387 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:59.642186  248387 pod_ready.go:102] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"False"
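The pod_ready waits above skip pods whose node has not yet reported Ready and otherwise block until each system-critical pod carries the Ready condition. A condensed client-go sketch of that per-pod check is shown below; the kubeconfig path is a placeholder, the pod name is just one of the pods waited on in the log, and the helper is illustrative rather than minikube's pod_ready.go.

// Illustrative sketch: wait for a kube-system pod to report Ready via client-go.
// The kubeconfig path is a placeholder; the 4m deadline mirrors the logged wait.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-no-preload-640155", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for pod to be Ready")
}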
	I1031 00:13:00.818958  248084 start.go:369] acquired machines lock for "old-k8s-version-225140" in 1m2.435313483s
	I1031 00:13:00.819017  248084 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:13:00.819032  248084 fix.go:54] fixHost starting: 
	I1031 00:13:00.819456  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:00.819490  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:00.838737  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I1031 00:13:00.839191  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:00.839773  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:13:00.839794  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:00.840290  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:00.840514  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:00.840697  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:13:00.843346  248084 fix.go:102] recreateIfNeeded on old-k8s-version-225140: state=Stopped err=<nil>
	I1031 00:13:00.843381  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	W1031 00:13:00.843658  248084 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:13:00.848997  248084 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-225140" ...
	I1031 00:12:59.318443  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.319011  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Found IP for machine: 192.168.39.2
	I1031 00:12:59.319037  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Reserving static IP address...
	I1031 00:12:59.319070  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has current primary IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.319522  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-892233", mac: "52:54:00:f4:e2:1e", ip: "192.168.39.2"} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.319557  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Reserved static IP address: 192.168.39.2
	I1031 00:12:59.319595  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | skip adding static IP to network mk-default-k8s-diff-port-892233 - found existing host DHCP lease matching {name: "default-k8s-diff-port-892233", mac: "52:54:00:f4:e2:1e", ip: "192.168.39.2"}
	I1031 00:12:59.319620  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | Getting to WaitForSSH function...
	I1031 00:12:59.319653  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for SSH to be available...
	I1031 00:12:59.322357  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.322780  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.322819  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.322938  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | Using SSH client type: external
	I1031 00:12:59.322969  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa (-rw-------)
	I1031 00:12:59.323009  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:12:59.323029  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | About to run SSH command:
	I1031 00:12:59.323064  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | exit 0
	I1031 00:12:59.421581  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | SSH cmd err, output: <nil>: 
	I1031 00:12:59.421963  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetConfigRaw
	I1031 00:12:59.422651  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetIP
	I1031 00:12:59.425540  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.425916  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.425961  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.426201  249055 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/config.json ...
	I1031 00:12:59.426454  249055 machine.go:88] provisioning docker machine ...
	I1031 00:12:59.426481  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:12:59.426720  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetMachineName
	I1031 00:12:59.426879  249055 buildroot.go:166] provisioning hostname "default-k8s-diff-port-892233"
	I1031 00:12:59.426898  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetMachineName
	I1031 00:12:59.427067  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:12:59.429588  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.429937  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.429975  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.430208  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:12:59.430403  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:12:59.430573  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:12:59.430690  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:12:59.430852  249055 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:59.431368  249055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1031 00:12:59.431386  249055 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-892233 && echo "default-k8s-diff-port-892233" | sudo tee /etc/hostname
	I1031 00:12:59.572253  249055 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-892233
	
	I1031 00:12:59.572299  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:12:59.575534  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.575858  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.575919  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.576140  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:12:59.576366  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:12:59.576592  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:12:59.576766  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:12:59.576919  249055 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:59.577349  249055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1031 00:12:59.577372  249055 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-892233' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-892233/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-892233' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:12:59.714987  249055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:12:59.715020  249055 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:12:59.715079  249055 buildroot.go:174] setting up certificates
	I1031 00:12:59.715094  249055 provision.go:83] configureAuth start
	I1031 00:12:59.715115  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetMachineName
	I1031 00:12:59.715440  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetIP
	I1031 00:12:59.718485  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.718900  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.718932  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.719039  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:12:59.721488  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.721844  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.721874  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.722068  249055 provision.go:138] copyHostCerts
	I1031 00:12:59.722141  249055 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:12:59.722155  249055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:12:59.722227  249055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:12:59.722363  249055 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:12:59.722377  249055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:12:59.722402  249055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:12:59.722528  249055 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:12:59.722538  249055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:12:59.722560  249055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:12:59.722619  249055 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-892233 san=[192.168.39.2 192.168.39.2 localhost 127.0.0.1 minikube default-k8s-diff-port-892233]
	I1031 00:13:00.038821  249055 provision.go:172] copyRemoteCerts
	I1031 00:13:00.038892  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:13:00.038924  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.042237  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.042585  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.042627  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.042753  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.042976  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.043252  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.043410  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:13:00.130665  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:13:00.158853  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1031 00:13:00.188023  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 00:13:00.214990  249055 provision.go:86] duration metric: configureAuth took 499.878655ms
	I1031 00:13:00.215020  249055 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:13:00.215284  249055 config.go:182] Loaded profile config "default-k8s-diff-port-892233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:13:00.215445  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.218339  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.218821  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.218861  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.219039  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.219282  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.219500  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.219672  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.219873  249055 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:00.220371  249055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1031 00:13:00.220411  249055 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:13:00.567578  249055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:13:00.567663  249055 machine.go:91] provisioned docker machine in 1.141189726s
	I1031 00:13:00.567680  249055 start.go:300] post-start starting for "default-k8s-diff-port-892233" (driver="kvm2")
	I1031 00:13:00.567695  249055 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:13:00.567719  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.568094  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:13:00.568134  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.570983  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.571434  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.571478  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.571649  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.571849  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.572010  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.572173  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:13:00.660300  249055 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:13:00.665751  249055 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:13:00.665779  249055 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:13:00.665853  249055 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:13:00.665958  249055 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:13:00.666046  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:13:00.677668  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:13:00.702125  249055 start.go:303] post-start completed in 134.425173ms
	I1031 00:13:00.702165  249055 fix.go:56] fixHost completed within 23.735576451s
	I1031 00:13:00.702195  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.705554  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.705976  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.706029  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.706319  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.706545  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.706722  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.706872  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.707040  249055 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:00.707449  249055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1031 00:13:00.707470  249055 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1031 00:13:00.818749  249055 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698711180.762641951
	
	I1031 00:13:00.818785  249055 fix.go:206] guest clock: 1698711180.762641951
	I1031 00:13:00.818797  249055 fix.go:219] Guest: 2023-10-31 00:13:00.762641951 +0000 UTC Remote: 2023-10-31 00:13:00.70217124 +0000 UTC m=+181.580385758 (delta=60.470711ms)
	I1031 00:13:00.818850  249055 fix.go:190] guest clock delta is within tolerance: 60.470711ms
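The guest-clock check above runs `date +%s.%N` on the VM and compares the result with the host clock. A minimal Go sketch of that comparison, using the timestamps from this run (the 2s tolerance is an assumed value, not minikube's actual constant; float parsing loses sub-microsecond precision, which is fine for a millisecond-scale delta):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns guest minus host.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		// Values from this run: guest 1698711180.762641951, host 1698711180.70217124.
		d, err := clockDelta("1698711180.762641951\n", time.Unix(0, 1698711180702171240))
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // assumed tolerance for the sketch
		fmt.Printf("delta=%v, within tolerance: %v\n", d, d < tolerance && d > -tolerance)
	}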
	I1031 00:13:00.818861  249055 start.go:83] releasing machines lock for "default-k8s-diff-port-892233", held for 23.852333569s
	I1031 00:13:00.818897  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.819199  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetIP
	I1031 00:13:00.822674  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.823152  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.823194  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.823436  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.824107  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.824336  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.824543  249055 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:13:00.824603  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.824669  249055 ssh_runner.go:195] Run: cat /version.json
	I1031 00:13:00.824698  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.827622  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.828092  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.828149  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.828176  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.828377  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.828420  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.828477  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.828558  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.828638  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.828741  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.828817  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.828926  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.829014  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:13:00.829694  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:13:00.945937  249055 ssh_runner.go:195] Run: systemctl --version
	I1031 00:13:00.951731  249055 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:13:01.099346  249055 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:13:01.106701  249055 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:13:01.106789  249055 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:13:01.122651  249055 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
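The find/mv step above simply renames any bridge or podman CNI configs in /etc/cni/net.d so the runtime ignores them. A minimal Go sketch of the same rename-to-.mk_disabled idea (directory, patterns and suffix come from the log; error handling is simplified):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNI renames bridge/podman CNI configs to *.mk_disabled and
	// returns the files it touched.
	func disableBridgeCNI(dir string) ([]string, error) {
		var disabled []string
		for _, pattern := range []string{"*bridge*", "*podman*"} {
			matches, err := filepath.Glob(filepath.Join(dir, pattern))
			if err != nil {
				return nil, err
			}
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					return nil, err
				}
				disabled = append(disabled, m)
			}
		}
		return disabled, nil
	}

	func main() {
		files, err := disableBridgeCNI("/etc/cni/net.d")
		fmt.Println(files, err)
	}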
	I1031 00:13:01.122738  249055 start.go:472] detecting cgroup driver to use...
	I1031 00:13:01.122839  249055 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:13:01.140968  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:13:01.159184  249055 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:13:01.159267  249055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:13:01.176636  249055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:13:01.190420  249055 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:13:01.304327  249055 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:13:01.446312  249055 docker.go:214] disabling docker service ...
	I1031 00:13:01.446440  249055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:13:01.462043  249055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:13:01.478402  249055 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:13:01.618099  249055 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:13:01.745376  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 00:13:01.758262  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:13:01.774927  249055 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1031 00:13:01.774999  249055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:01.784376  249055 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:13:01.784441  249055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:01.793769  249055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:01.802954  249055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:01.813429  249055 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
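Taken together, the crictl and CRI-O edits above leave the runtime configured roughly as follows; only the lines touched by the tee/sed commands are shown here, the rest of the stock drop-in is assumed unchanged:

	/etc/crictl.yaml:
	    runtime-endpoint: unix:///var/run/crio/crio.sock

	/etc/crio/crio.conf.d/02-crio.conf (lines rewritten by the sed edits):
	    pause_image = "registry.k8s.io/pause:3.9"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"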
	I1031 00:13:01.822730  249055 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:13:01.832032  249055 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 00:13:01.832103  249055 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 00:13:01.845005  249055 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 00:13:01.855358  249055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:13:01.997815  249055 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1031 00:13:02.229016  249055 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:13:02.229090  249055 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:13:02.233980  249055 start.go:540] Will wait 60s for crictl version
	I1031 00:13:02.234044  249055 ssh_runner.go:195] Run: which crictl
	I1031 00:13:02.237901  249055 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:13:02.280450  249055 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:13:02.280562  249055 ssh_runner.go:195] Run: crio --version
	I1031 00:13:02.326608  249055 ssh_runner.go:195] Run: crio --version
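The "Will wait 60s for socket path" step above is a simple existence poll on the CRI-O socket before crictl is queried. A minimal Go sketch of that poll (path and timeout from the log; the 500ms poll interval is an assumption):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for path until it exists or timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil // socket is present
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
	}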
	I1031 00:13:02.381010  249055 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1031 00:12:57.879480  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:58.378990  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:58.401245  248718 api_server.go:72] duration metric: took 2.5625596s to wait for apiserver process to appear ...
	I1031 00:12:58.401294  248718 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:12:58.401317  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:01.483261  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:01.483293  248718 api_server.go:103] status: https://192.168.50.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:01.483309  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:01.586135  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:01.586172  248718 api_server.go:103] status: https://192.168.50.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:02.086932  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:02.095676  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:13:02.095714  248718 api_server.go:103] status: https://192.168.50.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:13:02.586339  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:02.599335  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:13:02.599376  248718 api_server.go:103] status: https://192.168.50.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:13:03.087312  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:03.095444  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 200:
	ok
	I1031 00:13:03.107809  248718 api_server.go:141] control plane version: v1.28.3
	I1031 00:13:03.107842  248718 api_server.go:131] duration metric: took 4.706538937s to wait for apiserver health ...
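The healthz wait above keeps probing https://192.168.50.2:8443/healthz and treats the 403 (anonymous user) and 500 (bootstrap hooks still failing) responses as "not ready yet" until a plain 200 "ok" arrives. A minimal Go sketch of that loop; the anonymous, certificate-skipping client is an assumption for the sketch, minikube itself uses the cluster's client credentials:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // test-only
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned "ok"
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.50.2:8443/healthz", 4*time.Minute))
	}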
	I1031 00:13:03.107855  248718 cni.go:84] Creating CNI manager for ""
	I1031 00:13:03.107864  248718 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:03.110057  248718 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:13:02.382546  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetIP
	I1031 00:13:02.386646  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:02.387022  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:02.387068  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:02.387291  249055 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 00:13:02.393394  249055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:13:02.408630  249055 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:13:02.408723  249055 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:13:02.461303  249055 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1031 00:13:02.461388  249055 ssh_runner.go:195] Run: which lz4
	I1031 00:13:02.466160  249055 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1031 00:13:02.472133  249055 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 00:13:02.472175  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1031 00:13:01.647436  248387 pod_ready.go:102] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:03.653247  248387 pod_ready.go:102] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:03.111616  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:13:03.142561  248718 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:13:03.210454  248718 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:13:03.229202  248718 system_pods.go:59] 8 kube-system pods found
	I1031 00:13:03.229253  248718 system_pods.go:61] "coredns-5dd5756b68-dqrs4" [f6d80a09-c397-4c78-a038-f07cad11de9c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1031 00:13:03.229269  248718 system_pods.go:61] "etcd-embed-certs-078843" [2dd3d20f-1309-4ec9-ab75-6b00cadc5827] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1031 00:13:03.229278  248718 system_pods.go:61] "kube-apiserver-embed-certs-078843" [6a41123e-11a9-4aff-8f78-802b8f59a1bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1031 00:13:03.229289  248718 system_pods.go:61] "kube-controller-manager-embed-certs-078843" [9ccb551e-3e3f-4cdc-991e-65b41febf105] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1031 00:13:03.229302  248718 system_pods.go:61] "kube-proxy-287dq" [c9c3a3a9-ff79-4cd8-ab26-a4ca2bec1fd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1031 00:13:03.229321  248718 system_pods.go:61] "kube-scheduler-embed-certs-078843" [13a0f095-b945-437c-a7ef-929739bfcb01] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1031 00:13:03.229339  248718 system_pods.go:61] "metrics-server-57f55c9bc5-pm6qx" [5ed61015-eb88-4381-adc3-8d1f4021c6aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:13:03.229353  248718 system_pods.go:61] "storage-provisioner" [6bce0572-aad8-4a9f-978f-9bd0ff62904a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1031 00:13:03.229369  248718 system_pods.go:74] duration metric: took 18.888134ms to wait for pod list to return data ...
	I1031 00:13:03.229379  248718 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:13:03.269761  248718 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:13:03.269808  248718 node_conditions.go:123] node cpu capacity is 2
	I1031 00:13:03.269821  248718 node_conditions.go:105] duration metric: took 40.435389ms to run NodePressure ...
	I1031 00:13:03.269843  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:03.828792  248718 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1031 00:13:03.840423  248718 kubeadm.go:787] kubelet initialised
	I1031 00:13:03.840449  248718 kubeadm.go:788] duration metric: took 11.631934ms waiting for restarted kubelet to initialise ...
	I1031 00:13:03.840461  248718 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:13:03.856214  248718 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:03.885090  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.885128  248718 pod_ready.go:81] duration metric: took 28.821802ms waiting for pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:03.885141  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.885169  248718 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:03.903365  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "etcd-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.903468  248718 pod_ready.go:81] duration metric: took 18.286782ms waiting for pod "etcd-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:03.903494  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "etcd-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.903516  248718 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:03.918470  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.918511  248718 pod_ready.go:81] duration metric: took 14.954407ms waiting for pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:03.918536  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.918548  248718 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:03.933999  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.934040  248718 pod_ready.go:81] duration metric: took 15.480835ms waiting for pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:03.934057  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.934068  248718 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-287dq" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:04.237338  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "kube-proxy-287dq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.237374  248718 pod_ready.go:81] duration metric: took 303.296061ms waiting for pod "kube-proxy-287dq" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:04.237389  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "kube-proxy-287dq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.237398  248718 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:04.634179  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.634222  248718 pod_ready.go:81] duration metric: took 396.814691ms waiting for pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:04.634238  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.634253  248718 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:05.035746  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:05.035785  248718 pod_ready.go:81] duration metric: took 401.520697ms waiting for pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:05.035801  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:05.035816  248718 pod_ready.go:38] duration metric: took 1.195339888s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
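The pod_ready wait above polls each system-critical pod for a Ready condition and bails out early with "(skipping!)" whenever the hosting node itself is not Ready. A minimal client-go sketch of that behaviour, using the kubeconfig path seen in this run; this is an illustration of the pattern, not minikube's pod_ready.go implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func conditionTrue(conds []corev1.PodCondition, t corev1.PodConditionType) bool {
		for _, c := range conds {
			if c.Type == t {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func nodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitPodReady polls the pod until Ready, but skips (returns an error) if the
	// node hosting it is not Ready, mirroring the "(skipping!)" log lines above.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				node, nerr := cs.CoreV1().Nodes().Get(context.TODO(), pod.Spec.NodeName, metav1.GetOptions{})
				if nerr == nil && !nodeReady(node) {
					return fmt.Errorf("node %q hosting pod %q is not Ready, skipping", node.Name, name)
				}
				if conditionTrue(pod.Status.Conditions, corev1.PodReady) {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %q never became Ready", name)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17527-208817/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		fmt.Println(waitPodReady(cs, "kube-system", "coredns-5dd5756b68-dqrs4", 4*time.Minute))
	}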
	I1031 00:13:05.035852  248718 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:13:05.053467  248718 ops.go:34] apiserver oom_adj: -16
	I1031 00:13:05.053499  248718 kubeadm.go:640] restartCluster took 20.703241237s
	I1031 00:13:05.053510  248718 kubeadm.go:406] StartCluster complete in 20.760104259s
	I1031 00:13:05.053534  248718 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:13:05.053649  248718 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:13:05.056586  248718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:13:05.056927  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:13:05.057035  248718 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:13:05.057123  248718 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-078843"
	I1031 00:13:05.057141  248718 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-078843"
	W1031 00:13:05.057149  248718 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:13:05.057204  248718 config.go:182] Loaded profile config "embed-certs-078843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:13:05.057234  248718 addons.go:69] Setting default-storageclass=true in profile "embed-certs-078843"
	I1031 00:13:05.057211  248718 host.go:66] Checking if "embed-certs-078843" exists ...
	I1031 00:13:05.057248  248718 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-078843"
	I1031 00:13:05.057647  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.057682  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.057706  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.057743  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.057816  248718 addons.go:69] Setting metrics-server=true in profile "embed-certs-078843"
	I1031 00:13:05.057835  248718 addons.go:231] Setting addon metrics-server=true in "embed-certs-078843"
	W1031 00:13:05.057846  248718 addons.go:240] addon metrics-server should already be in state true
	I1031 00:13:05.057940  248718 host.go:66] Checking if "embed-certs-078843" exists ...
	I1031 00:13:05.058407  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.058492  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.077590  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40411
	I1031 00:13:05.077948  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44471
	I1031 00:13:05.078081  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.078347  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.078769  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.078785  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.079028  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.079054  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.079408  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.085132  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.085145  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34653
	I1031 00:13:05.085597  248718 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-078843" context rescaled to 1 replicas
	I1031 00:13:05.085640  248718 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:13:05.088029  248718 out.go:177] * Verifying Kubernetes components...
	I1031 00:13:05.085726  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.085922  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.086067  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:13:05.089646  248718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:13:05.089718  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.090571  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.090592  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.091096  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.091945  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.092003  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.095067  248718 addons.go:231] Setting addon default-storageclass=true in "embed-certs-078843"
	W1031 00:13:05.095093  248718 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:13:05.095131  248718 host.go:66] Checking if "embed-certs-078843" exists ...
	I1031 00:13:05.095551  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.095608  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.111102  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I1031 00:13:05.111739  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.112393  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.112413  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.112797  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.112983  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:13:05.114423  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37229
	I1031 00:13:05.114993  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.115615  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.115634  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.115848  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:13:05.116042  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.118503  248718 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:13:05.116288  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:13:05.120126  248718 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:13:05.120149  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:13:05.120184  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:13:05.120637  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I1031 00:13:05.121136  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.121582  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.121601  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.122054  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.122163  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:13:05.122536  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.122576  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.124417  248718 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:13:00.852003  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Start
	I1031 00:13:00.853038  248084 main.go:141] libmachine: (old-k8s-version-225140) Ensuring networks are active...
	I1031 00:13:00.853268  248084 main.go:141] libmachine: (old-k8s-version-225140) Ensuring network default is active
	I1031 00:13:00.853774  248084 main.go:141] libmachine: (old-k8s-version-225140) Ensuring network mk-old-k8s-version-225140 is active
	I1031 00:13:00.854290  248084 main.go:141] libmachine: (old-k8s-version-225140) Getting domain xml...
	I1031 00:13:00.855089  248084 main.go:141] libmachine: (old-k8s-version-225140) Creating domain...
	I1031 00:13:02.250983  248084 main.go:141] libmachine: (old-k8s-version-225140) Waiting to get IP...
	I1031 00:13:02.251883  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:02.252351  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:02.252421  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:02.252327  249826 retry.go:31] will retry after 242.989359ms: waiting for machine to come up
	I1031 00:13:02.497099  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:02.497647  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:02.497671  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:02.497581  249826 retry.go:31] will retry after 267.660992ms: waiting for machine to come up
	I1031 00:13:02.767445  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:02.770812  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:02.770846  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:02.770757  249826 retry.go:31] will retry after 311.592507ms: waiting for machine to come up
	I1031 00:13:03.085650  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:03.086233  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:03.086262  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:03.086139  249826 retry.go:31] will retry after 594.222148ms: waiting for machine to come up
	I1031 00:13:03.681721  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:03.682255  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:03.682286  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:03.682147  249826 retry.go:31] will retry after 758.043103ms: waiting for machine to come up
	I1031 00:13:04.442274  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:04.443048  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:04.443078  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:04.442997  249826 retry.go:31] will retry after 887.518169ms: waiting for machine to come up
	I1031 00:13:05.332541  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:05.333184  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:05.333212  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:05.333129  249826 retry.go:31] will retry after 851.434462ms: waiting for machine to come up
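The retry.go lines above show the usual pattern while waiting for the old-k8s-version VM to pick up a DHCP lease: retry the IP lookup with a growing, slightly jittered delay. A minimal Go sketch of that retry helper; lookupIP is an assumed stand-in for the libvirt lease query, not minikube's actual function:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry calls fn up to attempts times, sleeping a growing, jittered delay
	// between failures, like the varying "will retry after ..." intervals above.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		var tries int
		lookupIP := func() error {
			tries++
			if tries < 5 {
				return errors.New("unable to find current IP address of domain")
			}
			return nil
		}
		fmt.Println(retry(10, 250*time.Millisecond, lookupIP))
	}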
	I1031 00:13:05.125889  248718 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:13:05.125912  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:13:05.125931  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:13:05.124466  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.126004  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:13:05.126025  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.125276  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:13:05.126198  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:13:05.126338  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:13:05.126414  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:13:05.131827  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:13:05.131844  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.131883  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:13:05.131916  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.132049  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:13:05.132274  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:13:05.132420  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:13:05.144729  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I1031 00:13:05.145178  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.145775  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.145795  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.146202  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.146381  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:13:05.149644  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:13:05.150317  248718 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:13:05.150332  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:13:05.150350  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:13:05.153417  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.153915  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:13:05.153956  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.154082  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:13:05.154266  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:13:05.154606  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:13:05.154731  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:13:05.279166  248718 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:13:05.279209  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:13:05.314989  248718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:13:05.318765  248718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:13:05.337844  248718 node_ready.go:35] waiting up to 6m0s for node "embed-certs-078843" to be "Ready" ...
	I1031 00:13:05.338209  248718 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1031 00:13:05.343889  248718 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:13:05.343913  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:13:05.391973  248718 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:13:05.392002  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:13:05.442745  248718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:13:06.821970  248718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.503163864s)
	I1031 00:13:06.822030  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.822047  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.821970  248718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.506945748s)
	I1031 00:13:06.822097  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.822123  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.822539  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.822568  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.822594  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.822620  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.822641  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.822654  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.822665  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.822689  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.822702  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.822711  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.823128  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.823187  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.823196  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.823249  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.823286  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.823305  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.838726  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.838749  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.839036  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.839101  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.839124  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.863966  248718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.421170822s)
	I1031 00:13:06.864085  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.864105  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.864472  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.864499  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.864511  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.864520  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.865117  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.865133  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.865136  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.865144  248718 addons.go:467] Verifying addon metrics-server=true in "embed-certs-078843"
	I1031 00:13:06.868351  248718 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1031 00:13:06.869950  248718 addons.go:502] enable addons completed in 1.812918702s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1031 00:13:07.438581  248718 node_ready.go:58] node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.402138  249055 crio.go:444] Took 1.936056 seconds to copy over tarball
	I1031 00:13:04.402221  249055 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 00:13:07.956805  249055 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.554540356s)
	I1031 00:13:07.956841  249055 crio.go:451] Took 3.554667 seconds to extract the tarball
	I1031 00:13:07.956854  249055 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 00:13:08.017763  249055 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:13:08.072921  249055 crio.go:496] all images are preloaded for cri-o runtime.
	I1031 00:13:08.072982  249055 cache_images.go:84] Images are preloaded, skipping loading
	I1031 00:13:08.073063  249055 ssh_runner.go:195] Run: crio config
	I1031 00:13:08.131013  249055 cni.go:84] Creating CNI manager for ""
	I1031 00:13:08.131045  249055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:08.131070  249055 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:13:08.131099  249055 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-892233 NodeName:default-k8s-diff-port-892233 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 00:13:08.131362  249055 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-892233"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 00:13:08.131583  249055 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-892233 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-892233 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
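The multi-document kubeadm.yaml rendered above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a purely illustrative aside (not part of minikube), a file like this can be spot-checked by decoding each YAML document and printing the fields that matter for this profile, such as the 8444 bind port:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	// Hypothetical local copy of the generated config shown above.
    	f, err := os.Open("kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		// Report the documents whose fields this profile overrides.
    		switch doc["kind"] {
    		case "InitConfiguration":
    			ep := doc["localAPIEndpoint"].(map[string]interface{})
    			fmt.Println("advertiseAddress:", ep["advertiseAddress"], "bindPort:", ep["bindPort"])
    		case "ClusterConfiguration":
    			fmt.Println("controlPlaneEndpoint:", doc["controlPlaneEndpoint"])
    		}
    	}
    }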
	I1031 00:13:08.131658  249055 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 00:13:08.140884  249055 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:13:08.140973  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:13:08.149405  249055 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (386 bytes)
	I1031 00:13:08.166006  249055 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:13:08.182874  249055 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1031 00:13:08.200304  249055 ssh_runner.go:195] Run: grep 192.168.39.2	control-plane.minikube.internal$ /etc/hosts
	I1031 00:13:08.203993  249055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:13:08.217645  249055 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233 for IP: 192.168.39.2
	I1031 00:13:08.217692  249055 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:13:08.217873  249055 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:13:08.217924  249055 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:13:08.218015  249055 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/client.key
	I1031 00:13:08.308243  249055 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/apiserver.key.dd3b77ed
	I1031 00:13:08.308354  249055 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/proxy-client.key
	I1031 00:13:08.308540  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:13:08.308606  249055 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:13:08.308626  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:13:08.308652  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:13:08.308678  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:13:08.308701  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:13:08.308743  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:13:08.309489  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:13:08.339601  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:13:08.365873  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:13:08.393028  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1031 00:13:08.418983  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:13:08.445555  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:13:08.471234  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:13:08.496657  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:13:08.522698  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:13:08.546933  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:13:08.570645  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:13:08.596096  249055 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:13:08.615431  249055 ssh_runner.go:195] Run: openssl version
	I1031 00:13:08.621901  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:13:08.633316  249055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:13:08.638479  249055 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:13:08.638546  249055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:13:08.644750  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:13:08.656306  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:13:08.669978  249055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:08.675964  249055 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:08.676033  249055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:08.682433  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 00:13:08.694215  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:13:08.706255  249055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:13:08.713046  249055 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:13:08.713147  249055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:13:08.720902  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1031 00:13:08.732062  249055 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:13:08.737112  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:13:08.745040  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:13:08.753046  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:13:08.759410  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:13:08.765847  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:13:08.772651  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
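The six openssl invocations above each confirm that a control-plane certificate stays valid for at least another 86400 seconds (the -checkend argument). A minimal stand-alone equivalent in Go, with a hypothetical path, looks roughly like this:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // certValidFor reports whether the PEM certificate at path remains valid
    // for at least d, mirroring `openssl x509 -noout -checkend`.
    func certValidFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := certValidFor("apiserver-etcd-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("valid for another 24h:", ok)
    }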
	I1031 00:13:08.779086  249055 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-892233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-892233 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.2 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:13:08.779224  249055 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:13:08.779292  249055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:13:08.832024  249055 cri.go:89] found id: ""
	I1031 00:13:08.832096  249055 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 00:13:08.842618  249055 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 00:13:08.842641  249055 kubeadm.go:636] restartCluster start
	I1031 00:13:08.842716  249055 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 00:13:08.852209  249055 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:08.853480  249055 kubeconfig.go:92] found "default-k8s-diff-port-892233" server: "https://192.168.39.2:8444"
	I1031 00:13:08.855965  249055 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 00:13:08.865555  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:08.865617  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:08.877258  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:08.877285  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:08.877332  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:08.887847  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
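The repeated "Checking apiserver status" entries here and below are a poll-until-deadline loop around pgrep while the control plane restarts. A bare-bones sketch of that pattern (stdlib only, not minikube's actual helper) is:

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls pgrep until a kube-apiserver process appears or
    // the context deadline expires.
    func waitForAPIServer(ctx context.Context, interval time.Duration) error {
    	for {
    		out, err := exec.CommandContext(ctx, "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil && len(out) > 0 {
    			fmt.Printf("apiserver pid: %s", out)
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("apiserver did not appear: %w", ctx.Err())
    		case <-time.After(interval):
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()
    	if err := waitForAPIServer(ctx, 500*time.Millisecond); err != nil {
    		fmt.Println(err)
    	}
    }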
	I1031 00:13:05.643929  248387 pod_ready.go:92] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:05.643958  248387 pod_ready.go:81] duration metric: took 8.31111047s waiting for pod "kube-scheduler-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:05.643971  248387 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:07.946810  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:06.186224  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:06.186916  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:06.186948  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:06.186867  249826 retry.go:31] will retry after 964.405003ms: waiting for machine to come up
	I1031 00:13:07.153455  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:07.153973  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:07.154006  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:07.153917  249826 retry.go:31] will retry after 1.515980724s: waiting for machine to come up
	I1031 00:13:08.671700  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:08.672189  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:08.672219  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:08.672117  249826 retry.go:31] will retry after 2.254841495s: waiting for machine to come up
	I1031 00:13:09.658372  248718 node_ready.go:58] node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:11.938230  248718 node_ready.go:58] node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:12.439097  248718 node_ready.go:49] node "embed-certs-078843" has status "Ready":"True"
	I1031 00:13:12.439129  248718 node_ready.go:38] duration metric: took 7.101255254s waiting for node "embed-certs-078843" to be "Ready" ...
	I1031 00:13:12.439147  248718 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:13:12.447673  248718 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.469967  248718 pod_ready.go:92] pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.470002  248718 pod_ready.go:81] duration metric: took 22.292329ms waiting for pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.470017  248718 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.482061  248718 pod_ready.go:92] pod "etcd-embed-certs-078843" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.482092  248718 pod_ready.go:81] duration metric: took 12.066806ms waiting for pod "etcd-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.482106  248718 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.489019  248718 pod_ready.go:92] pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.489052  248718 pod_ready.go:81] duration metric: took 6.936171ms waiting for pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.489066  248718 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.500686  248718 pod_ready.go:92] pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.500712  248718 pod_ready.go:81] duration metric: took 11.637946ms waiting for pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.500722  248718 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-287dq" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:09.388669  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:09.388776  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:09.400708  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:09.888027  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:09.888146  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:09.900678  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:10.388004  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:10.388114  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:10.403685  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:10.888198  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:10.888314  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:10.900608  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:11.388239  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:11.388363  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:11.404992  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:11.888425  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:11.888541  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:11.900436  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:12.388293  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:12.388418  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:12.404621  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:12.888037  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:12.888156  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:12.900860  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:13.388276  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:13.388371  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:13.400841  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:13.888124  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:13.888238  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:13.903041  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:10.168791  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:12.169662  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:14.669047  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:10.928893  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:10.929414  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:10.929445  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:10.929369  249826 retry.go:31] will retry after 2.792980456s: waiting for machine to come up
	I1031 00:13:13.724006  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:13.724430  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:13.724469  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:13.724356  249826 retry.go:31] will retry after 2.555956413s: waiting for machine to come up
	I1031 00:13:12.838631  248718 pod_ready.go:92] pod "kube-proxy-287dq" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.838658  248718 pod_ready.go:81] duration metric: took 337.929955ms waiting for pod "kube-proxy-287dq" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.838668  248718 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:13.239513  248718 pod_ready.go:92] pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:13.239541  248718 pod_ready.go:81] duration metric: took 400.86714ms waiting for pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:13.239552  248718 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:15.546507  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:14.388661  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:14.388736  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:14.402388  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:14.888855  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:14.888965  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:14.903137  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:15.388757  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:15.388868  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:15.404412  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:15.888848  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:15.888984  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:15.902181  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:16.388790  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:16.388913  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:16.402283  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:16.888892  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:16.889035  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:16.900677  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:17.388842  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:17.388983  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:17.401399  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:17.888981  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:17.889099  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:17.901474  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:18.387997  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:18.388083  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:18.399745  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:18.866186  249055 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:13:18.866263  249055 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:13:18.866282  249055 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:13:18.866352  249055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:13:18.906125  249055 cri.go:89] found id: ""
	I1031 00:13:18.906214  249055 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:13:18.921555  249055 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:13:18.930111  249055 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:13:18.930193  249055 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:13:18.938516  249055 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:13:18.938545  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:19.070700  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:17.167517  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:19.170710  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:16.282473  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:16.282944  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:16.282975  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:16.282900  249826 retry.go:31] will retry after 2.811414756s: waiting for machine to come up
	I1031 00:13:19.096338  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:19.096738  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:19.096760  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:19.096714  249826 retry.go:31] will retry after 3.844203493s: waiting for machine to come up
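The growing "will retry after …" intervals above come from a jittered backoff while the old-k8s-version-225140 VM waits for a DHCP lease. A rough illustration of that retry shape (not the retry package minikube actually uses) could be:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff calls fn until it succeeds or attempts run out, sleeping
    // a growing, jittered interval between tries.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		wait := base * time.Duration(i+1)
    		wait += time.Duration(rand.Int63n(int64(wait)/2 + 1)) // up to 50% jitter
    		fmt.Printf("attempt %d failed (%v), retrying after %v\n", i+1, err, wait)
    		time.Sleep(wait)
    	}
    	return err
    }

    func main() {
    	err := retryWithBackoff(5, time.Second, func() error {
    		return fmt.Errorf("machine has no IP address yet") // placeholder condition
    	})
    	fmt.Println("final result:", err)
    }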
	I1031 00:13:17.548558  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:20.047074  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:22.047691  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:20.139806  249055 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069066882s)
	I1031 00:13:20.139847  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:20.337823  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:20.417915  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:20.499750  249055 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:13:20.499831  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:20.515735  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:21.029420  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:21.529636  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:22.029757  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:22.529034  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:23.029479  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:23.055542  249055 api_server.go:72] duration metric: took 2.555800185s to wait for apiserver process to appear ...
	I1031 00:13:23.055573  249055 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:13:23.055591  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
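Once the apiserver process exists, the restart is gated on the /healthz endpoint at https://192.168.39.2:8444. A quick stand-alone probe for an endpoint like that (insecure TLS because the apiserver cert is signed by minikube's own CA; illustrative only) might look like:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Skip verification only for this ad-hoc probe.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.39.2:8444/healthz")
    	if err != nil {
    		fmt.Println("healthz not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body))
    }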
	I1031 00:13:21.667545  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:24.167560  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:22.943000  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:22.943492  248084 main.go:141] libmachine: (old-k8s-version-225140) Found IP for machine: 192.168.72.65
	I1031 00:13:22.943521  248084 main.go:141] libmachine: (old-k8s-version-225140) Reserving static IP address...
	I1031 00:13:22.943540  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has current primary IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:22.944080  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "old-k8s-version-225140", mac: "52:54:00:9c:98:61", ip: "192.168.72.65"} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:22.944120  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | skip adding static IP to network mk-old-k8s-version-225140 - found existing host DHCP lease matching {name: "old-k8s-version-225140", mac: "52:54:00:9c:98:61", ip: "192.168.72.65"}
	I1031 00:13:22.944139  248084 main.go:141] libmachine: (old-k8s-version-225140) Reserved static IP address: 192.168.72.65
	I1031 00:13:22.944160  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Getting to WaitForSSH function...
	I1031 00:13:22.944168  248084 main.go:141] libmachine: (old-k8s-version-225140) Waiting for SSH to be available...
	I1031 00:13:22.946799  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:22.947189  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:22.947222  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:22.947416  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Using SSH client type: external
	I1031 00:13:22.947448  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa (-rw-------)
	I1031 00:13:22.947508  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:13:22.947534  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | About to run SSH command:
	I1031 00:13:22.947581  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | exit 0
	I1031 00:13:23.045850  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | SSH cmd err, output: <nil>: 
	I1031 00:13:23.046239  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetConfigRaw
	I1031 00:13:23.046996  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetIP
	I1031 00:13:23.050061  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.050464  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.050496  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.050789  248084 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/config.json ...
	I1031 00:13:23.051046  248084 machine.go:88] provisioning docker machine ...
	I1031 00:13:23.051070  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:23.051289  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetMachineName
	I1031 00:13:23.051484  248084 buildroot.go:166] provisioning hostname "old-k8s-version-225140"
	I1031 00:13:23.051511  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetMachineName
	I1031 00:13:23.051731  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.054157  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.054603  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.054636  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.054784  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:23.055085  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.055291  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.055503  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:23.055718  248084 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:23.056178  248084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1031 00:13:23.056203  248084 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-225140 && echo "old-k8s-version-225140" | sudo tee /etc/hostname
	I1031 00:13:23.184296  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-225140
	
	I1031 00:13:23.184356  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.187270  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.187720  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.187761  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.187895  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:23.188085  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.188228  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.188340  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:23.188565  248084 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:23.189104  248084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1031 00:13:23.189135  248084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-225140' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-225140/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-225140' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:13:23.315792  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:13:23.315829  248084 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:13:23.315893  248084 buildroot.go:174] setting up certificates
	I1031 00:13:23.315906  248084 provision.go:83] configureAuth start
	I1031 00:13:23.315921  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetMachineName
	I1031 00:13:23.316224  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetIP
	I1031 00:13:23.319690  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.320111  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.320143  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.320315  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.322897  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.323334  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.323362  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.323720  248084 provision.go:138] copyHostCerts
	I1031 00:13:23.323803  248084 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:13:23.323820  248084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:13:23.323895  248084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:13:23.324025  248084 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:13:23.324043  248084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:13:23.324080  248084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:13:23.324257  248084 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:13:23.324272  248084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:13:23.324313  248084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:13:23.324415  248084 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-225140 san=[192.168.72.65 192.168.72.65 localhost 127.0.0.1 minikube old-k8s-version-225140]
	I1031 00:13:23.580836  248084 provision.go:172] copyRemoteCerts
	I1031 00:13:23.580905  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:13:23.580929  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.584088  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.584527  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.584576  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.584872  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:23.585115  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.585290  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:23.585440  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:13:23.680241  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1031 00:13:23.706003  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 00:13:23.730993  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:13:23.760873  248084 provision.go:86] duration metric: configureAuth took 444.934236ms
	I1031 00:13:23.760909  248084 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:13:23.761208  248084 config.go:182] Loaded profile config "old-k8s-version-225140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1031 00:13:23.761370  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.764798  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.765219  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.765273  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.765411  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:23.765646  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.765868  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.766036  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:23.766256  248084 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:23.766762  248084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1031 00:13:23.766796  248084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:13:24.109914  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:13:24.109946  248084 machine.go:91] provisioned docker machine in 1.058882555s
	I1031 00:13:24.109958  248084 start.go:300] post-start starting for "old-k8s-version-225140" (driver="kvm2")
	I1031 00:13:24.109972  248084 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:13:24.109994  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.110392  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:13:24.110456  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:24.113825  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.114298  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.114335  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.114587  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:24.114814  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.114989  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:24.115148  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:13:24.206997  248084 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:13:24.211439  248084 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:13:24.211467  248084 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:13:24.211551  248084 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:13:24.211635  248084 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:13:24.211722  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:13:24.219976  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:13:24.246337  248084 start.go:303] post-start completed in 136.360652ms
	I1031 00:13:24.246366  248084 fix.go:56] fixHost completed within 23.427336969s
	I1031 00:13:24.246389  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:24.249547  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.249876  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.249919  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.250099  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:24.250300  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.250603  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.250815  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:24.251022  248084 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:24.251387  248084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1031 00:13:24.251413  248084 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 00:13:24.366477  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698711204.302770779
	
	I1031 00:13:24.366499  248084 fix.go:206] guest clock: 1698711204.302770779
	I1031 00:13:24.366507  248084 fix.go:219] Guest: 2023-10-31 00:13:24.302770779 +0000 UTC Remote: 2023-10-31 00:13:24.246369619 +0000 UTC m=+368.452785688 (delta=56.40116ms)
	I1031 00:13:24.366558  248084 fix.go:190] guest clock delta is within tolerance: 56.40116ms
	I1031 00:13:24.366570  248084 start.go:83] releasing machines lock for "old-k8s-version-225140", held for 23.547580429s
	I1031 00:13:24.366599  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.366871  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetIP
	I1031 00:13:24.369640  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.369985  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.370032  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.370155  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.370695  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.370910  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.370996  248084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:13:24.371044  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:24.371205  248084 ssh_runner.go:195] Run: cat /version.json
	I1031 00:13:24.371233  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:24.373962  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.374315  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.374349  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.374379  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.374621  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:24.374759  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.374796  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.374822  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.374952  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:24.375018  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:24.375140  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.375139  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:13:24.375278  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:24.375383  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:13:24.490387  248084 ssh_runner.go:195] Run: systemctl --version
	I1031 00:13:24.497758  248084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:13:24.645967  248084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:13:24.652716  248084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:13:24.652795  248084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:13:24.668415  248084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 00:13:24.668446  248084 start.go:472] detecting cgroup driver to use...
	I1031 00:13:24.668513  248084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:13:24.683255  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:13:24.697242  248084 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:13:24.697295  248084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:13:24.710554  248084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:13:24.725562  248084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:13:24.847447  248084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:13:24.982382  248084 docker.go:214] disabling docker service ...
	I1031 00:13:24.982477  248084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:13:24.998270  248084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:13:25.011136  248084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:13:25.129421  248084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:13:25.258387  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 00:13:25.271528  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:13:25.291702  248084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1031 00:13:25.291788  248084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:25.301762  248084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:13:25.301826  248084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:25.311900  248084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:25.322111  248084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:25.331429  248084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
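Note: taken together, the tee and sed commands above configure crictl to talk to CRI-O's socket and pin the pause image, cgroup manager and conmon cgroup in the CRI-O drop-in config. A quick way to confirm the end state by hand (sketch, using only values shown in the commands above):

    # the tee above writes /etc/crictl.yaml with a single line:
    #   runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl version            # should now report RuntimeName: cri-o (as seen further down)
    # the sed edits above pin three values in /etc/crio/crio.conf.d/02-crio.conf:
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"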
	I1031 00:13:25.344907  248084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:13:25.354397  248084 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 00:13:25.354463  248084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 00:13:25.367335  248084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
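Note: the sysctl probe above fails with status 255 because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding enabled directly. To check the end state by hand on the guest (sketch):

    lsmod | grep br_netfilter                  # loaded by the modprobe above
    sysctl net.bridge.bridge-nf-call-iptables  # resolvable once the module is in place
    cat /proc/sys/net/ipv4/ip_forward          # 1 after the echo above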
	I1031 00:13:25.376415  248084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:13:25.493551  248084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1031 00:13:25.677504  248084 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:13:25.677648  248084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:13:25.683882  248084 start.go:540] Will wait 60s for crictl version
	I1031 00:13:25.683952  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:25.687748  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:13:25.729230  248084 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:13:25.729316  248084 ssh_runner.go:195] Run: crio --version
	I1031 00:13:25.782619  248084 ssh_runner.go:195] Run: crio --version
	I1031 00:13:25.832400  248084 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1031 00:13:25.833898  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetIP
	I1031 00:13:25.836924  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:25.837347  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:25.837372  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:25.837666  248084 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1031 00:13:25.841940  248084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
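Note: the bash one-liner above updates /etc/hosts idempotently: it strips any existing host.minikube.internal entry, appends the gateway IP shown, and copies the temp file back. The same steps, split out for readability (taken directly from the logged command):

    # idempotent /etc/hosts update performed by the one-liner above
    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts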
	I1031 00:13:24.051460  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:26.554325  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:26.499116  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:26.499157  249055 api_server.go:103] status: https://192.168.39.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:26.499172  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:26.509898  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:26.509929  249055 api_server.go:103] status: https://192.168.39.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:27.010543  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:27.024054  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:13:27.024104  249055 api_server.go:103] status: https://192.168.39.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:13:27.510303  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:27.518621  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:13:27.518658  249055 api_server.go:103] status: https://192.168.39.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:13:28.010147  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:28.017834  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 200:
	ok
	I1031 00:13:28.027903  249055 api_server.go:141] control plane version: v1.28.3
	I1031 00:13:28.028005  249055 api_server.go:131] duration metric: took 4.972421145s to wait for apiserver health ...
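Note: the loop above polls the apiserver's /healthz endpoint until it returns 200. The early 403 responses are the anonymous probe being rejected, and the 500 bodies show which post-start hooks (e.g. rbac/bootstrap-roles) are still unfinished. A manual equivalent of the probe (sketch; -k because the check is made against the serving cert without client credentials):

    curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.39.2:8444/healthz
    # 403 while the anonymous request is still forbidden, 500 while post-start hooks
    # are incomplete, 200 once the control plane reports healthy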
	I1031 00:13:28.028033  249055 cni.go:84] Creating CNI manager for ""
	I1031 00:13:28.028070  249055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:28.030427  249055 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:13:28.032020  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:13:28.042889  249055 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
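Note: the bridge CNI configuration is written to /etc/cni/net.d/1-k8s.conflist (457 bytes here); the file contents are not printed in the log. A generic bridge conflist of this kind, shown purely for orientation (illustrative shape only, using the 10.244.0.0/16 pod CIDR noted elsewhere in this log, not the actual file):

    cat /etc/cni/net.d/1-k8s.conflist
    # illustrative shape only:
    # { "cniVersion": "0.3.1", "name": "bridge",
    #   "plugins": [ { "type": "bridge", "isDefaultGateway": true,
    #                  "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    #                { "type": "portmap", "capabilities": { "portMappings": true } } ] }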
	I1031 00:13:28.084357  249055 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:13:28.114368  249055 system_pods.go:59] 8 kube-system pods found
	I1031 00:13:28.114416  249055 system_pods.go:61] "coredns-5dd5756b68-6sbs7" [4cf52749-359c-42b7-a985-d2cdc3f20700] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1031 00:13:28.114430  249055 system_pods.go:61] "etcd-default-k8s-diff-port-892233" [75c06d7d-877d-4df8-9805-0ea50aec938f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1031 00:13:28.114440  249055 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-892233" [6eb1d4f8-0594-4992-962c-383062853ed0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1031 00:13:28.114460  249055 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-892233" [8b5e8ab9-34fe-4337-95d1-554adbd23505] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1031 00:13:28.114470  249055 system_pods.go:61] "kube-proxy-jn2j8" [23f4d9d7-61a0-43d9-a815-a4ce10a568e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1031 00:13:28.114479  249055 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-892233" [dcb7e68d-4e3d-4e46-935a-1372309ad89c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1031 00:13:28.114488  249055 system_pods.go:61] "metrics-server-57f55c9bc5-7klqw" [3f832e2c-81b4-431e-b1a2-987057fdae0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:13:28.114502  249055 system_pods.go:61] "storage-provisioner" [b912cf02-280b-47e0-8e72-fd22566a40f9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1031 00:13:28.114515  249055 system_pods.go:74] duration metric: took 30.127265ms to wait for pod list to return data ...
	I1031 00:13:28.114534  249055 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:13:28.126920  249055 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:13:28.126971  249055 node_conditions.go:123] node cpu capacity is 2
	I1031 00:13:28.127018  249055 node_conditions.go:105] duration metric: took 12.476154ms to run NodePressure ...
	I1031 00:13:28.127048  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:28.402286  249055 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1031 00:13:28.407352  249055 kubeadm.go:787] kubelet initialised
	I1031 00:13:28.407384  249055 kubeadm.go:788] duration metric: took 5.069821ms waiting for restarted kubelet to initialise ...
	I1031 00:13:28.407397  249055 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:13:28.413100  249055 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:26.174532  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:28.667350  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:25.856078  248084 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1031 00:13:25.856136  248084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:13:25.913612  248084 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1031 00:13:25.913733  248084 ssh_runner.go:195] Run: which lz4
	I1031 00:13:25.918632  248084 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1031 00:13:25.923981  248084 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 00:13:25.924014  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1031 00:13:27.712494  248084 crio.go:444] Took 1.793896 seconds to copy over tarball
	I1031 00:13:27.712615  248084 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 00:13:29.050835  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:31.549536  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:30.457173  249055 pod_ready.go:102] pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:33.255838  249055 pod_ready.go:102] pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:30.667667  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:33.167250  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:31.207204  248084 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.494544747s)
	I1031 00:13:31.207238  248084 crio.go:451] Took 3.494710 seconds to extract the tarball
	I1031 00:13:31.207250  248084 ssh_runner.go:146] rm: /preloaded.tar.lz4
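Note: the preload path above copies a ~440 MB lz4 tarball of container images into the guest and unpacks it under /var before deleting it. The equivalent manual steps, mirroring the logged commands (sketch):

    stat -c "%s %y" /preloaded.tar.lz4              # existence/size check (fails before the copy)
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4  # unpack the preloaded image layers under /var
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json                # re-check which images are now present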
	I1031 00:13:31.253648  248084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:13:31.312599  248084 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1031 00:13:31.312624  248084 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1031 00:13:31.312719  248084 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.312753  248084 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.312763  248084 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.312776  248084 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1031 00:13:31.312705  248084 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:13:31.313005  248084 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.313122  248084 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.312926  248084 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.314301  248084 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.314408  248084 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:13:31.314826  248084 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.314863  248084 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.314835  248084 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.314877  248084 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.314888  248084 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.314904  248084 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1031 00:13:31.492117  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.493373  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.506179  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.506237  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1031 00:13:31.510547  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.515827  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.524137  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.614442  248084 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1031 00:13:31.614494  248084 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.614544  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.622661  248084 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1031 00:13:31.622718  248084 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.622770  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.630473  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:13:31.674058  248084 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1031 00:13:31.674111  248084 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.674161  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.707251  248084 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1031 00:13:31.707293  248084 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1031 00:13:31.707337  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.718947  248084 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1031 00:13:31.719006  248084 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.719008  248084 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1031 00:13:31.718947  248084 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1031 00:13:31.719056  248084 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.719072  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.719084  248084 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.719111  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.719119  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.719139  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.719176  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.866787  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.866815  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1031 00:13:31.866818  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.866883  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1031 00:13:31.866887  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.866936  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1031 00:13:31.867046  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.993265  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1031 00:13:31.993505  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1031 00:13:31.993999  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1031 00:13:31.994045  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1031 00:13:31.994063  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1031 00:13:31.994123  248084 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1031 00:13:31.999020  248084 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1031 00:13:31.999034  248084 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1031 00:13:31.999068  248084 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1031 00:13:33.460498  248084 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.461402246s)
	I1031 00:13:33.460530  248084 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1031 00:13:33.460582  248084 cache_images.go:92] LoadImages completed in 2.147945804s
	W1031 00:13:33.460661  248084 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
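Note: the rmi/load sequence above removes the runtime's mismatching v1.16.0 images and then loads the only image that exists in the local cache (pause:3.1); loading the rest fails because their cache files are missing, as the warning shows. A condensed view of the per-image flow (sketch, commands taken from the log):

    # pause:3.1 is the only image present in the local cache
    sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1        # remove the mismatching runtime copy
    sudo podman load -i /var/lib/minikube/images/pause_3.1    # load the cached tarball into CRI-O storage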
	I1031 00:13:33.460749  248084 ssh_runner.go:195] Run: crio config
	I1031 00:13:33.528812  248084 cni.go:84] Creating CNI manager for ""
	I1031 00:13:33.528838  248084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:33.528865  248084 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:13:33.528895  248084 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.65 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-225140 NodeName:old-k8s-version-225140 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1031 00:13:33.529103  248084 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-225140"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-225140
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.65:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 00:13:33.529205  248084 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-225140 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-225140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
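Note: the kubelet drop-in above is installed as a systemd override; the lines that follow show it being written (429 bytes) alongside the kubelet service unit and the kubeadm config. To inspect the resulting files on the guest (sketch, paths taken from the scp steps below):

    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # 429-byte kubelet drop-in
    cat /lib/systemd/system/kubelet.service                     # 352-byte service unit
    cat /var/tmp/minikube/kubeadm.yaml.new                      # 2177-byte kubeadm config written below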
	I1031 00:13:33.529276  248084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1031 00:13:33.539328  248084 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:13:33.539424  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:13:33.551543  248084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1031 00:13:33.569095  248084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:13:33.586561  248084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1031 00:13:33.605084  248084 ssh_runner.go:195] Run: grep 192.168.72.65	control-plane.minikube.internal$ /etc/hosts
	I1031 00:13:33.609322  248084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:13:33.623527  248084 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140 for IP: 192.168.72.65
	I1031 00:13:33.623556  248084 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:13:33.623768  248084 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:13:33.623817  248084 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:13:33.623919  248084 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.key
	I1031 00:13:33.624000  248084 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/apiserver.key.fa85241c
	I1031 00:13:33.624074  248084 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/proxy-client.key
	I1031 00:13:33.624223  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:13:33.624267  248084 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:13:33.624285  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:13:33.624333  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:13:33.624377  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:13:33.624409  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:13:33.624480  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:13:33.625311  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:13:33.648457  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:13:33.673383  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:13:33.701679  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 00:13:33.725823  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:13:33.748912  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:13:33.777397  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:13:33.803003  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:13:33.827749  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:13:33.850011  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:13:33.871722  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:13:33.894663  248084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:13:33.912130  248084 ssh_runner.go:195] Run: openssl version
	I1031 00:13:33.918010  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:13:33.928381  248084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:13:33.933548  248084 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:13:33.933605  248084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:13:33.939344  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1031 00:13:33.950844  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:13:33.962585  248084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:13:33.968178  248084 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:13:33.968244  248084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:13:33.975606  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:13:33.986565  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:13:33.998188  248084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:34.003940  248084 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:34.004012  248084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:34.010088  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 00:13:34.022223  248084 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:13:34.028537  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:13:34.036319  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:13:34.043481  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:13:34.051269  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:13:34.058129  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:13:34.065473  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1031 00:13:34.072663  248084 kubeadm.go:404] StartCluster: {Name:old-k8s-version-225140 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-225140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.65 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:13:34.072781  248084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:13:34.072830  248084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:13:34.121758  248084 cri.go:89] found id: ""
	I1031 00:13:34.121848  248084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 00:13:34.135357  248084 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 00:13:34.135392  248084 kubeadm.go:636] restartCluster start
	I1031 00:13:34.135469  248084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 00:13:34.145173  248084 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:34.146905  248084 kubeconfig.go:92] found "old-k8s-version-225140" server: "https://192.168.72.65:8443"
	I1031 00:13:34.150660  248084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 00:13:34.163037  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:34.163119  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:34.184414  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:34.184441  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:34.184586  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:34.197787  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:34.698120  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:34.698246  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:34.710874  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:35.198312  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:35.198384  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:35.210933  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:35.698108  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:35.698210  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:35.710184  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:33.551354  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:36.048781  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:35.442171  249055 pod_ready.go:102] pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:36.941322  249055 pod_ready.go:92] pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:36.941344  249055 pod_ready.go:81] duration metric: took 8.528221711s waiting for pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:36.941353  249055 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:38.959679  249055 pod_ready.go:102] pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:35.168250  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:37.666699  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:36.198699  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:36.198787  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:36.211005  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:36.698612  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:36.698705  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:36.712106  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:37.198674  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:37.198779  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:37.211665  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:37.698160  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:37.698258  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:37.709798  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:38.198294  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:38.198410  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:38.210400  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:38.697965  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:38.698058  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:38.710188  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:39.198306  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:39.198435  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:39.210213  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:39.698867  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:39.698944  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:39.709958  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:40.198113  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:40.198217  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:40.209265  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:40.698424  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:40.698494  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:40.715194  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:38.548167  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:41.047378  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:39.959598  249055 pod_ready.go:92] pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:39.959625  249055 pod_ready.go:81] duration metric: took 3.018261782s waiting for pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.959638  249055 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.965182  249055 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:39.965204  249055 pod_ready.go:81] duration metric: took 5.558563ms waiting for pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.965218  249055 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.970258  249055 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:39.970283  249055 pod_ready.go:81] duration metric: took 5.058027ms waiting for pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.970293  249055 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jn2j8" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.975183  249055 pod_ready.go:92] pod "kube-proxy-jn2j8" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:39.975202  249055 pod_ready.go:81] duration metric: took 4.903272ms waiting for pod "kube-proxy-jn2j8" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.975209  249055 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:40.137875  249055 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:40.137907  249055 pod_ready.go:81] duration metric: took 162.69035ms waiting for pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:40.137921  249055 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:42.452793  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:40.167385  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:42.666396  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:41.198534  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:41.198640  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:41.210412  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:41.698420  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:41.698526  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:41.710324  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:42.198572  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:42.198649  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:42.210399  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:42.697932  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:42.698010  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:42.711010  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:43.198096  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:43.198182  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:43.209468  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:43.698864  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:43.698998  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:43.710735  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:44.163493  248084 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:13:44.163545  248084 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:13:44.163560  248084 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:13:44.163621  248084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:13:44.204352  248084 cri.go:89] found id: ""
	I1031 00:13:44.204444  248084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:13:44.219641  248084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:13:44.228342  248084 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:13:44.228420  248084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:13:44.237058  248084 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:13:44.237081  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:44.369926  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:45.077715  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:45.306025  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:45.399572  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:45.537955  248084 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:13:45.538046  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:45.554284  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:43.549424  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:46.052253  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:44.947118  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:46.954020  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:45.167622  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:47.669895  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:46.073056  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:46.572408  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:47.072392  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:47.098617  248084 api_server.go:72] duration metric: took 1.560662194s to wait for apiserver process to appear ...
	I1031 00:13:47.098650  248084 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:13:47.098673  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:48.547476  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:50.547537  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:49.446620  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:51.946346  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:53.949089  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:52.098997  248084 api_server.go:269] stopped: https://192.168.72.65:8443/healthz: Get "https://192.168.72.65:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1031 00:13:52.099073  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:52.709441  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:52.709490  248084 api_server.go:103] status: https://192.168.72.65:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:53.210178  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:53.216374  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1031 00:13:53.216403  248084 api_server.go:103] status: https://192.168.72.65:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1031 00:13:53.709935  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:53.717326  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1031 00:13:53.717361  248084 api_server.go:103] status: https://192.168.72.65:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1031 00:13:54.209883  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:54.215985  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 200:
	ok
	I1031 00:13:54.224088  248084 api_server.go:141] control plane version: v1.16.0
	I1031 00:13:54.224115  248084 api_server.go:131] duration metric: took 7.125456227s to wait for apiserver health ...
	I1031 00:13:54.224127  248084 cni.go:84] Creating CNI manager for ""
	I1031 00:13:54.224135  248084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:54.226152  248084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:13:50.168563  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:52.669900  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:54.227723  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:13:54.239709  248084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:13:54.261391  248084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:13:54.273728  248084 system_pods.go:59] 7 kube-system pods found
	I1031 00:13:54.273761  248084 system_pods.go:61] "coredns-5644d7b6d9-2s6pc" [c77d23a4-28d0-4bbf-bb28-baff23fc4987] Running
	I1031 00:13:54.273775  248084 system_pods.go:61] "etcd-old-k8s-version-225140" [dcc629ce-f107-4d14-b69b-20228b00b7c5] Running
	I1031 00:13:54.273783  248084 system_pods.go:61] "kube-apiserver-old-k8s-version-225140" [38fd683e-51fa-40f0-a3c6-afdf57e14132] Running
	I1031 00:13:54.273791  248084 system_pods.go:61] "kube-controller-manager-old-k8s-version-225140" [29b1b9cb-1819-497e-b0f9-c008b0ac6e26] Running
	I1031 00:13:54.273803  248084 system_pods.go:61] "kube-proxy-fxz8t" [57ccd26e-cbcf-4ed3-adbe-778fd8bcf27c] Running
	I1031 00:13:54.273811  248084 system_pods.go:61] "kube-scheduler-old-k8s-version-225140" [d8d4d75c-25f8-4485-853c-8fa75105c6e2] Running
	I1031 00:13:54.273818  248084 system_pods.go:61] "storage-provisioner" [8fc76055-6a96-4884-8f91-b2d3f598bc88] Running
	I1031 00:13:54.273826  248084 system_pods.go:74] duration metric: took 12.417629ms to wait for pod list to return data ...
	I1031 00:13:54.273840  248084 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:13:54.279056  248084 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:13:54.279082  248084 node_conditions.go:123] node cpu capacity is 2
	I1031 00:13:54.279094  248084 node_conditions.go:105] duration metric: took 5.248504ms to run NodePressure ...
	I1031 00:13:54.279111  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:54.594257  248084 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1031 00:13:54.600279  248084 retry.go:31] will retry after 287.663167ms: kubelet not initialised
	I1031 00:13:54.899142  248084 retry.go:31] will retry after 297.826066ms: kubelet not initialised
	I1031 00:13:55.205347  248084 retry.go:31] will retry after 797.709551ms: kubelet not initialised
	I1031 00:13:52.548142  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:54.548667  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:57.047942  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:56.446395  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:58.946167  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:55.167909  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:57.668179  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:59.668339  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:56.009099  248084 retry.go:31] will retry after 571.448668ms: kubelet not initialised
	I1031 00:13:56.593388  248084 retry.go:31] will retry after 1.82270665s: kubelet not initialised
	I1031 00:13:58.421789  248084 retry.go:31] will retry after 1.094040234s: kubelet not initialised
	I1031 00:13:59.522021  248084 retry.go:31] will retry after 3.716569913s: kubelet not initialised
	I1031 00:13:59.549278  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:01.551103  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:01.446913  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:03.947203  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:01.668422  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:03.668478  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:03.244381  248084 retry.go:31] will retry after 4.104024564s: kubelet not initialised
	I1031 00:14:04.048498  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:06.548070  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:06.447864  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:08.945886  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:06.166653  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:08.167008  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:07.354371  248084 retry.go:31] will retry after 9.18347873s: kubelet not initialised
	I1031 00:14:09.047421  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:11.048479  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:11.448689  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:13.948268  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:10.667348  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:12.667812  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:13.052934  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:15.547846  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:16.446625  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:18.447872  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:15.167259  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:17.666670  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:19.667251  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:16.544997  248084 retry.go:31] will retry after 8.29261189s: kubelet not initialised
	I1031 00:14:17.550692  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:20.045758  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:22.047516  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:20.946805  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:23.446875  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:21.667436  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:24.167210  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:24.843011  248084 retry.go:31] will retry after 15.309414425s: kubelet not initialised
	I1031 00:14:24.048197  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:26.546847  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:25.946796  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:27.950212  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:26.167443  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:28.168482  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:28.548116  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:31.047187  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:30.446164  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:32.451487  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:30.666762  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:32.667234  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:33.049216  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:35.545964  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:34.946961  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:36.947212  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:38.949437  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:35.167751  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:37.668981  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:39.669233  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:40.157618  248084 kubeadm.go:787] kubelet initialised
	I1031 00:14:40.157647  248084 kubeadm.go:788] duration metric: took 45.563360213s waiting for restarted kubelet to initialise ...
	I1031 00:14:40.157660  248084 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:14:40.163372  248084 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-2s6pc" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.169776  248084 pod_ready.go:92] pod "coredns-5644d7b6d9-2s6pc" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.169798  248084 pod_ready.go:81] duration metric: took 6.398827ms waiting for pod "coredns-5644d7b6d9-2s6pc" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.169806  248084 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-b6lnc" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.175023  248084 pod_ready.go:92] pod "coredns-5644d7b6d9-b6lnc" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.175047  248084 pod_ready.go:81] duration metric: took 5.233827ms waiting for pod "coredns-5644d7b6d9-b6lnc" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.175058  248084 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.179248  248084 pod_ready.go:92] pod "etcd-old-k8s-version-225140" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.179269  248084 pod_ready.go:81] duration metric: took 4.202967ms waiting for pod "etcd-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.179279  248084 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.183579  248084 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-225140" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.183593  248084 pod_ready.go:81] duration metric: took 4.308627ms waiting for pod "kube-apiserver-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.183604  248084 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.558275  248084 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-225140" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.558308  248084 pod_ready.go:81] duration metric: took 374.694908ms waiting for pod "kube-controller-manager-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.558321  248084 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fxz8t" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:37.547289  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:40.047586  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:41.446752  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:43.447874  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:42.166207  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:44.167277  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:40.958069  248084 pod_ready.go:92] pod "kube-proxy-fxz8t" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.958099  248084 pod_ready.go:81] duration metric: took 399.768399ms waiting for pod "kube-proxy-fxz8t" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.958112  248084 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:41.358244  248084 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-225140" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:41.358274  248084 pod_ready.go:81] duration metric: took 400.15381ms waiting for pod "kube-scheduler-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:41.358284  248084 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:43.666594  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:45.666948  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:42.547950  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:45.047306  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:45.946510  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:47.946663  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:46.167952  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:48.667854  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:48.166448  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:50.167022  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:47.547211  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:49.548100  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:51.548509  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:50.446801  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:52.447233  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:51.168676  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:53.667170  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:52.666608  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:54.667583  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:53.550528  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:56.050177  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:54.947677  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:57.447082  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:55.669616  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:58.170640  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:57.165612  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:59.168165  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:58.548441  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:01.047296  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:59.447626  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:01.947292  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:00.669772  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:03.167493  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:01.665706  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:04.166609  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:03.546708  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:05.547092  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:04.447672  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:06.449541  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:08.948333  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:05.667422  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:07.669173  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:06.666325  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:09.165998  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:07.547133  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:09.547568  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:11.551676  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:11.446875  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:13.946673  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:10.168209  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:12.666973  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:14.668147  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:11.166824  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:13.665410  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:14.046068  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:16.047803  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:15.946975  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:18.445704  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:17.167480  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:19.668157  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:16.165876  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:18.166620  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:20.666455  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:18.549666  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:21.046823  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:20.447212  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:22.947109  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:22.167144  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:24.168041  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:22.667076  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:25.167164  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:23.047419  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:25.049728  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:24.947312  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:27.449246  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:26.669861  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:29.168519  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:27.666465  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:30.166123  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:27.547889  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:30.046604  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:32.048045  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:29.948497  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:32.446948  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:31.670479  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:34.167604  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:32.668009  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:35.165749  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:34.547533  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:37.048031  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:34.945337  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:36.947811  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:36.168180  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:38.170343  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:37.168053  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:39.665709  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:39.552108  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:42.047262  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:39.451699  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:41.946296  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:40.667428  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:42.668235  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:41.666624  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:44.166672  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:44.047729  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:46.549442  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:44.447109  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:46.448250  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:48.947017  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:45.167138  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:47.666886  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:49.667907  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:46.669428  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:49.166194  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:49.047526  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:51.049047  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:50.947410  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:53.446734  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:52.167771  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:54.167875  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:51.666228  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:53.667295  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:53.052036  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:55.547767  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:55.946776  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:58.446825  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:56.668562  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:59.168110  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:56.167716  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:58.665487  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:00.668666  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:58.047770  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:00.047908  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:02.048356  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:00.946590  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:02.947001  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:01.667160  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:04.167375  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:03.165171  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:05.166289  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:04.049788  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:06.547020  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:05.446511  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:07.449772  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:06.667622  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:08.667665  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:07.166410  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:09.166536  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:09.049966  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:11.547967  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:09.947975  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:12.447789  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:11.168645  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:13.667838  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:11.665962  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:13.667117  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:15.667752  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:14.047716  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:16.048052  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:14.947264  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:16.947386  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:16.167045  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:18.668483  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:17.669275  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:20.167079  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:18.548369  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:20.548635  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:19.448662  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:21.947615  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:21.167164  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:23.167506  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:22.666820  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:25.166614  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:23.046392  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:25.548954  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:24.446814  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:26.945792  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:28.947133  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:25.167732  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:27.168662  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:29.171362  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:27.169221  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:29.667206  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:27.550807  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:30.048391  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:31.448249  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:33.946336  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:31.667185  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:33.667628  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:32.165207  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:34.166237  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:32.546558  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:35.046558  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:37.047654  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:35.946896  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:38.449959  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:35.668366  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:38.168509  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:36.166529  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:38.666448  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:39.552154  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:42.046335  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:40.946962  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:43.446383  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:40.666758  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:42.668031  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:41.168643  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:43.170216  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:45.666959  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:44.046908  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:46.548312  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:45.947573  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:47.947914  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:45.166562  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:47.667578  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:47.667903  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:50.166574  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:49.046763  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:51.047566  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:49.948510  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:52.446760  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:50.168646  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:52.667122  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:54.668132  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:52.168815  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:54.667713  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:53.546751  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:56.048217  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:54.947315  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:57.447727  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:57.169330  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:59.666819  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:57.166002  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:59.168109  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:58.548212  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:01.047033  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:59.448330  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:01.946970  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:01.667755  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:04.167493  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:01.666457  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:04.167186  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:03.546842  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:05.547488  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:04.445743  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:06.446624  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:08.451015  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:05.644115  248387 pod_ready.go:81] duration metric: took 4m0.000125657s waiting for pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace to be "Ready" ...
	E1031 00:17:05.644148  248387 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:17:05.644168  248387 pod_ready.go:38] duration metric: took 4m9.241022532s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:17:05.644198  248387 kubeadm.go:640] restartCluster took 4m28.058055798s
	W1031 00:17:05.644570  248387 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1031 00:17:05.644685  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
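(The wait above gives each system pod up to 4m0s to report Ready before minikube gives up and resets the control plane. A minimal way to reproduce that kind of bounded readiness check by hand with plain kubectl — illustrative only, not minikube's internal code; the label selector is an example, not taken from this run:)

  # wait up to 4 minutes for the metrics-server pod to become Ready (example selector)
  kubectl -n kube-system wait pod -l k8s-app=metrics-server --for=condition=Ready --timeout=4m0s
  # on timeout, minikube falls back to the reset it logs above:
  sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force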
	I1031 00:17:06.168910  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:08.666612  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:08.047998  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:10.547186  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:10.946940  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:13.455539  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:11.168678  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:13.667122  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:13.046682  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:13.240656  248718 pod_ready.go:81] duration metric: took 4m0.001083426s waiting for pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace to be "Ready" ...
	E1031 00:17:13.240702  248718 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:17:13.240712  248718 pod_ready.go:38] duration metric: took 4m0.801552437s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:17:13.240732  248718 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:17:13.240766  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:17:13.240930  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:17:13.307072  248718 cri.go:89] found id: "bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:13.307099  248718 cri.go:89] found id: ""
	I1031 00:17:13.307108  248718 logs.go:284] 1 containers: [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033]
	I1031 00:17:13.307180  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.312997  248718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:17:13.313067  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:17:13.364439  248718 cri.go:89] found id: "35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:13.364474  248718 cri.go:89] found id: ""
	I1031 00:17:13.364485  248718 logs.go:284] 1 containers: [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6]
	I1031 00:17:13.364561  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.370120  248718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:17:13.370186  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:17:13.413937  248718 cri.go:89] found id: "8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:13.413972  248718 cri.go:89] found id: ""
	I1031 00:17:13.413983  248718 logs.go:284] 1 containers: [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26]
	I1031 00:17:13.414051  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.420586  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:17:13.420669  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:17:13.476980  248718 cri.go:89] found id: "ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:13.477008  248718 cri.go:89] found id: ""
	I1031 00:17:13.477028  248718 logs.go:284] 1 containers: [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80]
	I1031 00:17:13.477100  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.482874  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:17:13.482957  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:17:13.532196  248718 cri.go:89] found id: "f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:13.532232  248718 cri.go:89] found id: ""
	I1031 00:17:13.532244  248718 logs.go:284] 1 containers: [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3]
	I1031 00:17:13.532314  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.539868  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:17:13.540017  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:17:13.595189  248718 cri.go:89] found id: "4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:13.595218  248718 cri.go:89] found id: ""
	I1031 00:17:13.595231  248718 logs.go:284] 1 containers: [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70]
	I1031 00:17:13.595305  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.601429  248718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:17:13.601496  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:17:13.641957  248718 cri.go:89] found id: ""
	I1031 00:17:13.641984  248718 logs.go:284] 0 containers: []
	W1031 00:17:13.641992  248718 logs.go:286] No container was found matching "kindnet"
	I1031 00:17:13.641998  248718 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:17:13.642053  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:17:13.683163  248718 cri.go:89] found id: "86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:13.683193  248718 cri.go:89] found id: "622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:13.683200  248718 cri.go:89] found id: ""
	I1031 00:17:13.683209  248718 logs.go:284] 2 containers: [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c]
	I1031 00:17:13.683266  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.689222  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.693814  248718 logs.go:123] Gathering logs for dmesg ...
	I1031 00:17:13.693839  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:17:13.710167  248718 logs.go:123] Gathering logs for kube-proxy [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3] ...
	I1031 00:17:13.710188  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:13.754241  248718 logs.go:123] Gathering logs for storage-provisioner [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3] ...
	I1031 00:17:13.754273  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:13.800473  248718 logs.go:123] Gathering logs for kube-apiserver [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033] ...
	I1031 00:17:13.800508  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:13.857072  248718 logs.go:123] Gathering logs for coredns [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26] ...
	I1031 00:17:13.857101  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:13.901072  248718 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:17:13.901102  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:17:14.390850  248718 logs.go:123] Gathering logs for container status ...
	I1031 00:17:14.390894  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:17:14.446107  248718 logs.go:123] Gathering logs for etcd [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6] ...
	I1031 00:17:14.446141  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:14.495337  248718 logs.go:123] Gathering logs for kube-scheduler [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80] ...
	I1031 00:17:14.495368  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:14.535558  248718 logs.go:123] Gathering logs for kube-controller-manager [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70] ...
	I1031 00:17:14.535591  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:14.589637  248718 logs.go:123] Gathering logs for kubelet ...
	I1031 00:17:14.589676  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1031 00:17:14.650509  248718 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:17:14.650559  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:17:14.816331  248718 logs.go:123] Gathering logs for storage-provisioner [622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c] ...
	I1031 00:17:14.816362  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
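(The log-gathering pass above reduces to a handful of host commands; they can be run directly on the node over SSH. The paths and the 400-line tail match the log; the --name filter is whichever component is of interest, and <container-id> is a placeholder for an ID returned by the first command:)

  # list the container ID for a component by name (all states, quiet output)
  sudo crictl ps -a --quiet --name=kube-apiserver
  # tail the last 400 lines of that container's logs
  sudo /usr/bin/crictl logs --tail 400 <container-id>
  # unit logs for the container runtime and the kubelet
  sudo journalctl -u crio -n 400
  sudo journalctl -u kubelet -n 400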
	I1031 00:17:17.363336  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:17:17.378105  248718 api_server.go:72] duration metric: took 4m12.292425365s to wait for apiserver process to appear ...
	I1031 00:17:17.378131  248718 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:17:17.378171  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:17:17.378234  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:17:17.424054  248718 cri.go:89] found id: "bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:17.424082  248718 cri.go:89] found id: ""
	I1031 00:17:17.424091  248718 logs.go:284] 1 containers: [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033]
	I1031 00:17:17.424152  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.428185  248718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:17:17.428246  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:17:17.465132  248718 cri.go:89] found id: "35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:17.465157  248718 cri.go:89] found id: ""
	I1031 00:17:17.465167  248718 logs.go:284] 1 containers: [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6]
	I1031 00:17:17.465219  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.469315  248718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:17:17.469392  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:17:17.504119  248718 cri.go:89] found id: "8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:17.504140  248718 cri.go:89] found id: ""
	I1031 00:17:17.504151  248718 logs.go:284] 1 containers: [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26]
	I1031 00:17:17.504199  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:15.946464  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:17.949398  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:19.822838  248387 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.178119551s)
	I1031 00:17:19.822927  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:17:19.838182  248387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:17:19.847738  248387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:17:19.857883  248387 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:17:19.857939  248387 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1031 00:17:19.911372  248387 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1031 00:17:19.911432  248387 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 00:17:20.091412  248387 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 00:17:20.091582  248387 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 00:17:20.091703  248387 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 00:17:20.351519  248387 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 00:17:16.166533  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:18.668258  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:20.353310  248387 out.go:204]   - Generating certificates and keys ...
	I1031 00:17:20.353500  248387 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 00:17:20.353598  248387 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 00:17:20.353712  248387 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1031 00:17:20.353809  248387 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1031 00:17:20.353933  248387 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1031 00:17:20.354050  248387 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1031 00:17:20.354132  248387 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1031 00:17:20.354241  248387 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1031 00:17:20.354353  248387 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1031 00:17:20.354596  248387 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1031 00:17:20.355193  248387 kubeadm.go:322] [certs] Using the existing "sa" key
	I1031 00:17:20.355332  248387 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 00:17:21.009329  248387 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 00:17:21.145431  248387 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 00:17:21.231013  248387 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 00:17:21.384423  248387 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 00:17:21.385066  248387 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 00:17:21.387895  248387 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 00:17:17.508240  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:17:17.510213  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:17:17.548666  248718 cri.go:89] found id: "ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:17.548692  248718 cri.go:89] found id: ""
	I1031 00:17:17.548702  248718 logs.go:284] 1 containers: [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80]
	I1031 00:17:17.548768  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.552963  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:17:17.553029  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:17:17.593690  248718 cri.go:89] found id: "f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:17.593728  248718 cri.go:89] found id: ""
	I1031 00:17:17.593739  248718 logs.go:284] 1 containers: [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3]
	I1031 00:17:17.593808  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.598269  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:17:17.598325  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:17:17.637723  248718 cri.go:89] found id: "4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:17.637750  248718 cri.go:89] found id: ""
	I1031 00:17:17.637761  248718 logs.go:284] 1 containers: [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70]
	I1031 00:17:17.637826  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.642006  248718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:17:17.642055  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:17:17.686659  248718 cri.go:89] found id: ""
	I1031 00:17:17.686687  248718 logs.go:284] 0 containers: []
	W1031 00:17:17.686695  248718 logs.go:286] No container was found matching "kindnet"
	I1031 00:17:17.686701  248718 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:17:17.686766  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:17:17.732114  248718 cri.go:89] found id: "86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:17.732147  248718 cri.go:89] found id: "622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:17.732154  248718 cri.go:89] found id: ""
	I1031 00:17:17.732163  248718 logs.go:284] 2 containers: [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c]
	I1031 00:17:17.732232  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.737308  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.741981  248718 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:17:17.742013  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:17:18.181024  248718 logs.go:123] Gathering logs for dmesg ...
	I1031 00:17:18.181062  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:17:18.196483  248718 logs.go:123] Gathering logs for coredns [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26] ...
	I1031 00:17:18.196519  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:18.235422  248718 logs.go:123] Gathering logs for kube-controller-manager [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70] ...
	I1031 00:17:18.235458  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:18.291366  248718 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:17:18.291402  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:17:18.412906  248718 logs.go:123] Gathering logs for etcd [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6] ...
	I1031 00:17:18.412960  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:18.469631  248718 logs.go:123] Gathering logs for kubelet ...
	I1031 00:17:18.469669  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1031 00:17:18.523997  248718 logs.go:123] Gathering logs for kube-scheduler [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80] ...
	I1031 00:17:18.524034  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:18.566490  248718 logs.go:123] Gathering logs for container status ...
	I1031 00:17:18.566520  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:17:18.626106  248718 logs.go:123] Gathering logs for storage-provisioner [622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c] ...
	I1031 00:17:18.626138  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:18.666341  248718 logs.go:123] Gathering logs for kube-apiserver [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033] ...
	I1031 00:17:18.666382  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:18.729380  248718 logs.go:123] Gathering logs for kube-proxy [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3] ...
	I1031 00:17:18.729430  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:18.788148  248718 logs.go:123] Gathering logs for storage-provisioner [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3] ...
	I1031 00:17:18.788182  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:21.330782  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:17:21.338085  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 200:
	ok
	I1031 00:17:21.339623  248718 api_server.go:141] control plane version: v1.28.3
	I1031 00:17:21.339671  248718 api_server.go:131] duration metric: took 3.961531332s to wait for apiserver health ...
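(The health probe above can be issued by hand against the same endpoint. Assuming the default RBAC setup, /healthz is readable anonymously via the system:public-info-viewer role, so an unauthenticated request normally returns "ok"; the address is the one from this run and the API server uses a self-signed certificate, hence -k:)

  # probe the apiserver health endpoint; expect the body "ok"
  curl -k https://192.168.50.2:8443/healthz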
	I1031 00:17:21.339684  248718 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:17:21.339718  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:17:21.339786  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:17:21.380659  248718 cri.go:89] found id: "bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:21.380687  248718 cri.go:89] found id: ""
	I1031 00:17:21.380696  248718 logs.go:284] 1 containers: [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033]
	I1031 00:17:21.380760  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.385559  248718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:17:21.385626  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:17:21.431810  248718 cri.go:89] found id: "35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:21.431841  248718 cri.go:89] found id: ""
	I1031 00:17:21.431851  248718 logs.go:284] 1 containers: [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6]
	I1031 00:17:21.431914  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.436489  248718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:17:21.436562  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:17:21.489003  248718 cri.go:89] found id: "8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:21.489036  248718 cri.go:89] found id: ""
	I1031 00:17:21.489047  248718 logs.go:284] 1 containers: [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26]
	I1031 00:17:21.489109  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.493691  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:17:21.493765  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:17:21.533480  248718 cri.go:89] found id: "ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:21.533507  248718 cri.go:89] found id: ""
	I1031 00:17:21.533518  248718 logs.go:284] 1 containers: [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80]
	I1031 00:17:21.533584  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.538269  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:17:21.538358  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:17:21.589588  248718 cri.go:89] found id: "f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:21.589621  248718 cri.go:89] found id: ""
	I1031 00:17:21.589632  248718 logs.go:284] 1 containers: [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3]
	I1031 00:17:21.589705  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.595927  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:17:21.596020  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:17:21.644705  248718 cri.go:89] found id: "4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:21.644730  248718 cri.go:89] found id: ""
	I1031 00:17:21.644738  248718 logs.go:284] 1 containers: [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70]
	I1031 00:17:21.644797  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.649696  248718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:17:21.649762  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:17:21.696655  248718 cri.go:89] found id: ""
	I1031 00:17:21.696692  248718 logs.go:284] 0 containers: []
	W1031 00:17:21.696703  248718 logs.go:286] No container was found matching "kindnet"
	I1031 00:17:21.696711  248718 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:17:21.696788  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:17:21.743499  248718 cri.go:89] found id: "86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:21.743523  248718 cri.go:89] found id: "622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:21.743528  248718 cri.go:89] found id: ""
	I1031 00:17:21.743535  248718 logs.go:284] 2 containers: [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c]
	I1031 00:17:21.743586  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.748625  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.753187  248718 logs.go:123] Gathering logs for dmesg ...
	I1031 00:17:21.753223  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:17:21.768074  248718 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:17:21.768115  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:17:21.913742  248718 logs.go:123] Gathering logs for coredns [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26] ...
	I1031 00:17:21.913782  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:21.966345  248718 logs.go:123] Gathering logs for storage-provisioner [622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c] ...
	I1031 00:17:21.966394  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:22.004823  248718 logs.go:123] Gathering logs for container status ...
	I1031 00:17:22.004857  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:17:22.059117  248718 logs.go:123] Gathering logs for etcd [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6] ...
	I1031 00:17:22.059147  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:22.117615  248718 logs.go:123] Gathering logs for kube-scheduler [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80] ...
	I1031 00:17:22.117655  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:22.160231  248718 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:17:22.160275  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:17:20.445730  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:22.447412  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:21.390006  248387 out.go:204]   - Booting up control plane ...
	I1031 00:17:21.390170  248387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 00:17:21.390275  248387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 00:17:21.391130  248387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 00:17:21.408062  248387 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 00:17:21.409190  248387 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 00:17:21.409256  248387 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1031 00:17:21.565150  248387 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 00:17:22.536881  248718 logs.go:123] Gathering logs for kube-apiserver [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033] ...
	I1031 00:17:22.536920  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:22.591993  248718 logs.go:123] Gathering logs for kube-proxy [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3] ...
	I1031 00:17:22.592030  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:22.644262  248718 logs.go:123] Gathering logs for storage-provisioner [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3] ...
	I1031 00:17:22.644302  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:22.688848  248718 logs.go:123] Gathering logs for kubelet ...
	I1031 00:17:22.688880  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1031 00:17:22.740390  248718 logs.go:123] Gathering logs for kube-controller-manager [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70] ...
	I1031 00:17:22.740440  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
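The "Gathering logs for ..." steps above combine two kinds of commands: `journalctl -u <unit> -n 400` for systemd units (kubelet, crio) and `sudo /usr/bin/crictl logs --tail 400 <container-id>` for each container ID found earlier, plus a `crictl ps -a || docker ps -a` fallback for overall container status. A hedged Go sketch of that fan-out, with the command strings taken from the log and everything else illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherLogs runs the same kinds of commands the log lines above show:
    // journalctl for systemd units and `crictl logs --tail 400` for
    // individual container IDs. Output is printed rather than collected.
    func gatherLogs(units []string, containerIDs []string) {
    	for _, u := range units {
    		cmd := fmt.Sprintf("sudo journalctl -u %s -n 400", u)
    		out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("==> %s <==\n%s\n", u, out)
    	}
    	for _, id := range containerIDs {
    		cmd := fmt.Sprintf("sudo /usr/bin/crictl logs --tail 400 %s", id)
    		out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("==> container %s <==\n%s\n", id, out)
    	}
    }

    func main() {
    	gatherLogs([]string{"kubelet", "crio"}, nil)
    }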
	I1031 00:17:25.317640  248718 system_pods.go:59] 8 kube-system pods found
	I1031 00:17:25.317675  248718 system_pods.go:61] "coredns-5dd5756b68-dqrs4" [f6d80a09-c397-4c78-a038-f07cad11de9c] Running
	I1031 00:17:25.317682  248718 system_pods.go:61] "etcd-embed-certs-078843" [2dd3d20f-1309-4ec9-ab75-6b00cadc5827] Running
	I1031 00:17:25.317690  248718 system_pods.go:61] "kube-apiserver-embed-certs-078843" [6a41123e-11a9-4aff-8f78-802b8f59a1bb] Running
	I1031 00:17:25.317696  248718 system_pods.go:61] "kube-controller-manager-embed-certs-078843" [9ccb551e-3e3f-4cdc-991e-65b41febf105] Running
	I1031 00:17:25.317702  248718 system_pods.go:61] "kube-proxy-287dq" [c9c3a3a9-ff79-4cd8-ab26-a4ca2bec1fd9] Running
	I1031 00:17:25.317709  248718 system_pods.go:61] "kube-scheduler-embed-certs-078843" [13a0f095-b945-437c-a7ef-929739bfcb01] Running
	I1031 00:17:25.317718  248718 system_pods.go:61] "metrics-server-57f55c9bc5-pm6qx" [5ed61015-eb88-4381-adc3-8d1f4021c6aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:17:25.317728  248718 system_pods.go:61] "storage-provisioner" [6bce0572-aad8-4a9f-978f-9bd0ff62904a] Running
	I1031 00:17:25.317737  248718 system_pods.go:74] duration metric: took 3.978040466s to wait for pod list to return data ...
	I1031 00:17:25.317752  248718 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:17:25.320120  248718 default_sa.go:45] found service account: "default"
	I1031 00:17:25.320147  248718 default_sa.go:55] duration metric: took 2.387709ms for default service account to be created ...
	I1031 00:17:25.320156  248718 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:17:25.325979  248718 system_pods.go:86] 8 kube-system pods found
	I1031 00:17:25.326004  248718 system_pods.go:89] "coredns-5dd5756b68-dqrs4" [f6d80a09-c397-4c78-a038-f07cad11de9c] Running
	I1031 00:17:25.326009  248718 system_pods.go:89] "etcd-embed-certs-078843" [2dd3d20f-1309-4ec9-ab75-6b00cadc5827] Running
	I1031 00:17:25.326014  248718 system_pods.go:89] "kube-apiserver-embed-certs-078843" [6a41123e-11a9-4aff-8f78-802b8f59a1bb] Running
	I1031 00:17:25.326018  248718 system_pods.go:89] "kube-controller-manager-embed-certs-078843" [9ccb551e-3e3f-4cdc-991e-65b41febf105] Running
	I1031 00:17:25.326022  248718 system_pods.go:89] "kube-proxy-287dq" [c9c3a3a9-ff79-4cd8-ab26-a4ca2bec1fd9] Running
	I1031 00:17:25.326025  248718 system_pods.go:89] "kube-scheduler-embed-certs-078843" [13a0f095-b945-437c-a7ef-929739bfcb01] Running
	I1031 00:17:25.326055  248718 system_pods.go:89] "metrics-server-57f55c9bc5-pm6qx" [5ed61015-eb88-4381-adc3-8d1f4021c6aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:17:25.326079  248718 system_pods.go:89] "storage-provisioner" [6bce0572-aad8-4a9f-978f-9bd0ff62904a] Running
	I1031 00:17:25.326088  248718 system_pods.go:126] duration metric: took 5.92719ms to wait for k8s-apps to be running ...
	I1031 00:17:25.326097  248718 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:17:25.326148  248718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:17:25.342753  248718 system_svc.go:56] duration metric: took 16.646026ms WaitForService to wait for kubelet.
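The kubelet-service wait just above boils down to a single systemd check: `systemctl is-active --quiet` exits 0 when the unit is active and non-zero otherwise. A minimal sketch of that check, assuming the unit is addressed simply as "kubelet" (the log passes the extra word "service", which systemctl tolerates):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // kubeletActive mirrors the check in the log above: a zero exit status
    // from `systemctl is-active --quiet` means the kubelet unit is running.
    func kubeletActive() bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }

    func main() {
    	start := time.Now()
    	fmt.Printf("kubelet active: %v (checked in %s)\n", kubeletActive(), time.Since(start))
    }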
	I1031 00:17:25.342775  248718 kubeadm.go:581] duration metric: took 4m20.257105243s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:17:25.342793  248718 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:17:25.348257  248718 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:17:25.348315  248718 node_conditions.go:123] node cpu capacity is 2
	I1031 00:17:25.348379  248718 node_conditions.go:105] duration metric: took 5.579398ms to run NodePressure ...
	I1031 00:17:25.348413  248718 start.go:228] waiting for startup goroutines ...
	I1031 00:17:25.348426  248718 start.go:233] waiting for cluster config update ...
	I1031 00:17:25.348440  248718 start.go:242] writing updated cluster config ...
	I1031 00:17:25.349022  248718 ssh_runner.go:195] Run: rm -f paused
	I1031 00:17:25.415112  248718 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1031 00:17:25.418179  248718 out.go:177] * Done! kubectl is now configured to use "embed-certs-078843" cluster and "default" namespace by default
	I1031 00:17:21.166338  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:23.666609  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:24.447530  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:26.947352  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:29.570822  248387 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004974 seconds
	I1031 00:17:29.570964  248387 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 00:17:29.587033  248387 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 00:17:30.119470  248387 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 00:17:30.119696  248387 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-640155 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 00:17:30.635312  248387 kubeadm.go:322] [bootstrap-token] Using token: cwaa4b.bqwxrocs0j7ngn44
	I1031 00:17:26.166271  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:28.664576  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:30.664963  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:30.636717  248387 out.go:204]   - Configuring RBAC rules ...
	I1031 00:17:30.636873  248387 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 00:17:30.642895  248387 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 00:17:30.651729  248387 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 00:17:30.655472  248387 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 00:17:30.659228  248387 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 00:17:30.668748  248387 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 00:17:30.690255  248387 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 00:17:30.950445  248387 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 00:17:31.051453  248387 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 00:17:31.051475  248387 kubeadm.go:322] 
	I1031 00:17:31.051536  248387 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 00:17:31.051583  248387 kubeadm.go:322] 
	I1031 00:17:31.051709  248387 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 00:17:31.051728  248387 kubeadm.go:322] 
	I1031 00:17:31.051767  248387 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 00:17:31.051843  248387 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 00:17:31.051930  248387 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 00:17:31.051943  248387 kubeadm.go:322] 
	I1031 00:17:31.052013  248387 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1031 00:17:31.052024  248387 kubeadm.go:322] 
	I1031 00:17:31.052104  248387 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 00:17:31.052130  248387 kubeadm.go:322] 
	I1031 00:17:31.052191  248387 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 00:17:31.052280  248387 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 00:17:31.052375  248387 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 00:17:31.052383  248387 kubeadm.go:322] 
	I1031 00:17:31.052485  248387 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 00:17:31.052578  248387 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 00:17:31.052612  248387 kubeadm.go:322] 
	I1031 00:17:31.052744  248387 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token cwaa4b.bqwxrocs0j7ngn44 \
	I1031 00:17:31.052900  248387 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 \
	I1031 00:17:31.052957  248387 kubeadm.go:322] 	--control-plane 
	I1031 00:17:31.052969  248387 kubeadm.go:322] 
	I1031 00:17:31.053092  248387 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 00:17:31.053107  248387 kubeadm.go:322] 
	I1031 00:17:31.053217  248387 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token cwaa4b.bqwxrocs0j7ngn44 \
	I1031 00:17:31.053359  248387 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1031 00:17:31.053517  248387 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 00:17:31.053540  248387 cni.go:84] Creating CNI manager for ""
	I1031 00:17:31.053552  248387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:17:31.055477  248387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:17:29.447694  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:31.449117  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:33.947759  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:31.056845  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:17:31.095104  248387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:17:31.131198  248387 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:17:31.131322  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:31.131337  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4 minikube.k8s.io/name=no-preload-640155 minikube.k8s.io/updated_at=2023_10_31T00_17_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:31.581951  248387 ops.go:34] apiserver oom_adj: -16
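The oom_adj line above comes from reading `/proc/$(pgrep kube-apiserver)/oom_adj`, i.e. the OOM score adjustment of the freshly started API server process (-16 in this run). A hedged sketch of the same procfs read; the helper name is illustrative and the command string is the one from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strconv"
    	"strings"
    )

    // apiserverOOMAdj reads /proc/<pid>/oom_adj for the kube-apiserver
    // process, the same procfs file the bash command in the log above cats.
    func apiserverOOMAdj() (int, error) {
    	out, err := exec.Command("/bin/bash", "-c",
    		"cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
    	if err != nil {
    		return 0, err
    	}
    	return strconv.Atoi(strings.TrimSpace(string(out)))
    }

    func main() {
    	adj, err := apiserverOOMAdj()
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("apiserver oom_adj:", adj) // the run above reported -16
    }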
	I1031 00:17:31.582010  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:31.741330  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:32.350182  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:32.850643  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:33.350205  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:33.850216  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:34.349583  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:34.850194  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:32.666281  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:35.168579  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:36.449644  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:38.946898  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:35.350661  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:35.850301  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:36.349673  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:36.849749  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:37.349755  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:37.850628  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:38.350204  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:38.849697  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:39.350194  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:39.850027  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:37.667083  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:40.166305  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:40.349747  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:40.850194  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:41.350476  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:41.850214  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:42.350555  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:42.850295  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:43.350645  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:43.679529  248387 kubeadm.go:1081] duration metric: took 12.548274555s to wait for elevateKubeSystemPrivileges.
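The long run of near-identical `kubectl get sa default` lines above is a retry loop: after creating the minikube-rbac clusterrolebinding, minikube polls roughly every 500 ms until the default service account exists, then reports how long the "elevateKubeSystemPrivileges" step took (about 12.5 s here). A hedged sketch of that poll-until-success pattern; the binary and kubeconfig paths are taken from the log, the 2-minute budget is an assumed example value:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls `kubectl get sa default` (as the repeated Run
    // lines above do) until it succeeds or the timeout expires.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		err := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig="+kubeconfig).Run()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("default service account not ready after %s: %w", timeout, err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.28.3/kubectl",
    		"/var/lib/minikube/kubeconfig", 2*time.Minute)
    	fmt.Println("wait result:", err)
    }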
	I1031 00:17:43.679561  248387 kubeadm.go:406] StartCluster complete in 5m6.156207823s
	I1031 00:17:43.679585  248387 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:17:43.679674  248387 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:17:43.682045  248387 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:17:43.684483  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:17:43.684785  248387 config.go:182] Loaded profile config "no-preload-640155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:17:43.684856  248387 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:17:43.684927  248387 addons.go:69] Setting storage-provisioner=true in profile "no-preload-640155"
	I1031 00:17:43.685036  248387 addons.go:231] Setting addon storage-provisioner=true in "no-preload-640155"
	W1031 00:17:43.685063  248387 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:17:43.685159  248387 host.go:66] Checking if "no-preload-640155" exists ...
	I1031 00:17:43.685323  248387 addons.go:69] Setting metrics-server=true in profile "no-preload-640155"
	I1031 00:17:43.685339  248387 addons.go:231] Setting addon metrics-server=true in "no-preload-640155"
	W1031 00:17:43.685356  248387 addons.go:240] addon metrics-server should already be in state true
	I1031 00:17:43.685395  248387 host.go:66] Checking if "no-preload-640155" exists ...
	I1031 00:17:43.685653  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.685706  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.685893  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.685978  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.686168  248387 addons.go:69] Setting default-storageclass=true in profile "no-preload-640155"
	I1031 00:17:43.686191  248387 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-640155"
	I1031 00:17:43.686545  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.686651  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.705002  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1031 00:17:43.705181  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39807
	I1031 00:17:43.705556  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.706410  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.706515  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.706543  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.706893  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I1031 00:17:43.706968  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.707139  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:17:43.707141  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.707157  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.707503  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.708166  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.708183  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.708236  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.708752  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.708783  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.709044  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.709715  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.709762  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.711511  248387 addons.go:231] Setting addon default-storageclass=true in "no-preload-640155"
	W1031 00:17:43.711525  248387 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:17:43.711553  248387 host.go:66] Checking if "no-preload-640155" exists ...
	I1031 00:17:43.711887  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.711927  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.730687  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I1031 00:17:43.731513  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.732184  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.732205  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.732737  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.733201  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:17:43.734567  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33799
	I1031 00:17:43.734708  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38837
	I1031 00:17:43.735166  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.735665  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.735687  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.736245  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.736325  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.736490  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:17:43.736559  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:17:43.737461  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.739478  248387 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:17:43.737480  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.738913  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:17:43.741138  248387 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:17:43.741154  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:17:43.741176  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:17:43.742564  248387 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:17:43.741663  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.744300  248387 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:17:43.744312  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:17:43.744326  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:17:43.744413  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.745065  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.745106  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.753076  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.753082  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:17:43.753110  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:17:43.753196  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:17:43.753200  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:17:43.753235  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.753249  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.753282  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:17:43.753376  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:17:43.753469  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:17:43.753527  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:17:43.753624  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:17:43.753739  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:17:43.770481  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44553
	I1031 00:17:43.770925  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.773191  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.773223  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.773636  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.773840  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:17:43.775633  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:17:43.775954  248387 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:17:43.775969  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:17:43.775988  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:17:43.778552  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.778797  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:17:43.778823  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.779021  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:17:43.779204  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:17:43.779386  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:17:43.779683  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:17:43.936171  248387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:17:43.958064  248387 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:17:43.958098  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:17:43.967116  248387 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-640155" context rescaled to 1 replicas
	I1031 00:17:43.967170  248387 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.168 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:17:43.969408  248387 out.go:177] * Verifying Kubernetes components...
	I1031 00:17:40.138062  249055 pod_ready.go:81] duration metric: took 4m0.000119587s waiting for pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace to be "Ready" ...
	E1031 00:17:40.138098  249055 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:17:40.138122  249055 pod_ready.go:38] duration metric: took 4m11.730710605s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:17:40.138164  249055 kubeadm.go:640] restartCluster took 4m31.295508075s
	W1031 00:17:40.138262  249055 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1031 00:17:40.138297  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1031 00:17:43.970897  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:17:43.997796  248387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:17:44.038710  248387 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:17:44.038738  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:17:44.075299  248387 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:17:44.075333  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:17:44.084795  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
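The sed pipeline just above rewrites CoreDNS's Corefile in place: it fetches the coredns ConfigMap, inserts a `hosts { <gateway IP> host.minikube.internal; fallthrough }` stanza before the `forward . /etc/resolv.conf` line (and a `log` directive before `errors`), then replaces the ConfigMap; a later line confirms the host record was injected. A hedged Go sketch of the Corefile edit itself, as pure string manipulation rather than the sed pipeline, with a made-up sample Corefile and exact whitespace treated as cosmetic:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a hosts{} stanza before the "forward ." line
    // of a Corefile, mirroring what the pipeline in the log above does to
    // CoreDNS's ConfigMap (host.minikube.internal -> host gateway IP).
    func injectHostRecord(corefile, hostIP string) string {
    	stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
    	var out strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			out.WriteString(stanza)
    		}
    		out.WriteString(line)
    	}
    	return out.String()
    }

    func main() {
    	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n    cache 30\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.61.1"))
    }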
	I1031 00:17:44.172770  248387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
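The addon flow visible above first copies each manifest into /etc/kubernetes/addons/ over SSH ("scp memory -->") and then applies them with a single kubectl invocation carrying one -f flag per file, under KUBECONFIG=/var/lib/minikube/kubeconfig. A hedged sketch of that apply step; paths and file names are copied from the log, the helper itself is illustrative (it sets KUBECONFIG via the environment rather than an inline `sudo ... kubectl` prefix):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // applyAddons runs the same style of command as the log above: one
    // `kubectl apply` invocation with a -f flag per addon manifest, using
    // the in-VM kubeconfig.
    func applyAddons(kubectl, kubeconfig string, manifests []string) error {
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command(kubectl, args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	err := applyAddons("/var/lib/minikube/binaries/v1.28.3/kubectl",
    		"/var/lib/minikube/kubeconfig",
    		[]string{
    			"/etc/kubernetes/addons/metrics-apiservice.yaml",
    			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    			"/etc/kubernetes/addons/metrics-server-service.yaml",
    		})
    	fmt.Println("apply result:", err)
    }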
	I1031 00:17:42.670020  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:45.165914  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:46.365906  248387 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.39492875s)
	I1031 00:17:46.365968  248387 node_ready.go:35] waiting up to 6m0s for node "no-preload-640155" to be "Ready" ...
	I1031 00:17:46.365998  248387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.368158747s)
	I1031 00:17:46.366066  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.366074  248387 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.281185782s)
	I1031 00:17:46.366103  248387 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1031 00:17:46.366086  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.366354  248387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.430149836s)
	I1031 00:17:46.366390  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.366402  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.366600  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.366612  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.366622  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.366631  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.366682  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.366732  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.366742  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.366751  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.366761  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.368921  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.368922  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.368958  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.369248  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.369293  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.369307  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.375988  248387 node_ready.go:49] node "no-preload-640155" has status "Ready":"True"
	I1031 00:17:46.376021  248387 node_ready.go:38] duration metric: took 10.036603ms waiting for node "no-preload-640155" to be "Ready" ...
	I1031 00:17:46.376036  248387 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:17:46.401563  248387 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gp6pj" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:46.425939  248387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.253121961s)
	I1031 00:17:46.426019  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.426035  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.427461  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.427471  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.427488  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.427498  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.427508  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.427894  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.427943  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.427954  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.427971  248387 addons.go:467] Verifying addon metrics-server=true in "no-preload-640155"
	I1031 00:17:46.436605  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.436630  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.436927  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.436959  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.436987  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.438529  248387 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1031 00:17:46.439869  248387 addons.go:502] enable addons completed in 2.755015847s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1031 00:17:48.527903  248387 pod_ready.go:92] pod "coredns-5dd5756b68-gp6pj" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.527939  248387 pod_ready.go:81] duration metric: took 2.126335033s waiting for pod "coredns-5dd5756b68-gp6pj" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.527954  248387 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.544043  248387 pod_ready.go:92] pod "etcd-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.544070  248387 pod_ready.go:81] duration metric: took 16.106665ms waiting for pod "etcd-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.544085  248387 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.552043  248387 pod_ready.go:92] pod "kube-apiserver-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.552075  248387 pod_ready.go:81] duration metric: took 7.981099ms waiting for pod "kube-apiserver-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.552092  248387 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.563073  248387 pod_ready.go:92] pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.563112  248387 pod_ready.go:81] duration metric: took 11.009619ms waiting for pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.563128  248387 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pkjsl" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.771051  248387 pod_ready.go:92] pod "kube-proxy-pkjsl" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.771080  248387 pod_ready.go:81] duration metric: took 207.944354ms waiting for pod "kube-proxy-pkjsl" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.771090  248387 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:49.170323  248387 pod_ready.go:92] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:49.170354  248387 pod_ready.go:81] duration metric: took 399.25516ms waiting for pod "kube-scheduler-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:49.170369  248387 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace to be "Ready" ...
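The pod_ready lines throughout this section all poll the same thing: whether a pod's Ready condition has become True (the metrics-server pods never do, which is what eventually exhausts the 4m/6m budgets seen elsewhere in the log). A hedged client-go sketch of that per-pod check follows; it assumes client-go is available, and the kubeconfig path and pod name are just the values visible in the log, not an excerpt of minikube's pod_ready.go:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the named pod has condition Ready=True,
    // which is what the pod_ready log lines above keep polling for.
    func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll every 2s, roughly the cadence the log lines above show.
    	for {
    		ok, err := podReady(cs, "kube-system", "metrics-server-57f55c9bc5-d2xg4")
    		fmt.Println("ready:", ok, "err:", err)
    		if ok {
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }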
	I1031 00:17:47.166417  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:49.665614  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:51.479213  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:53.979583  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:54.802281  249055 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.663950968s)
	I1031 00:17:54.802401  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:17:54.818228  249055 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:17:54.829802  249055 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:17:54.841203  249055 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:17:54.841254  249055 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
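The sequence just above shows the fallback path after the restart timed out: `kubeadm reset` wipes the node, the `ls -la` probe of the four kubeconfig files exits with status 2 because they are all gone, so the stale-config cleanup is skipped and a full `kubeadm init` is started with the preflight errors ignored. A small hedged sketch of that presence check; file paths come from the log, the function is illustrative only:

    package main

    import (
    	"fmt"
    	"os"
    )

    // staleConfigsPresent mirrors the `ls -la` check in the log above: if
    // any of the four kubeconfig files still exists after a reset it would
    // need cleanup; here they are all gone, so a fresh `kubeadm init` runs.
    func staleConfigsPresent() bool {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		if _, err := os.Stat(f); err == nil {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	fmt.Println("stale kubeconfigs present:", staleConfigsPresent())
    }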
	I1031 00:17:54.900359  249055 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1031 00:17:54.900453  249055 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 00:17:55.068403  249055 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 00:17:55.068563  249055 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 00:17:55.068676  249055 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 00:17:55.316737  249055 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 00:17:51.665839  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:53.666626  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:55.319016  249055 out.go:204]   - Generating certificates and keys ...
	I1031 00:17:55.319172  249055 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 00:17:55.319275  249055 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 00:17:55.319395  249055 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1031 00:17:55.319481  249055 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1031 00:17:55.319603  249055 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1031 00:17:55.320419  249055 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1031 00:17:55.320814  249055 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1031 00:17:55.321700  249055 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1031 00:17:55.322211  249055 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1031 00:17:55.322708  249055 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1031 00:17:55.323252  249055 kubeadm.go:322] [certs] Using the existing "sa" key
	I1031 00:17:55.323344  249055 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 00:17:55.388450  249055 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 00:17:55.461692  249055 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 00:17:55.807861  249055 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 00:17:55.963028  249055 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 00:17:55.963510  249055 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 00:17:55.966001  249055 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 00:17:55.967951  249055 out.go:204]   - Booting up control plane ...
	I1031 00:17:55.968125  249055 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 00:17:55.968238  249055 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 00:17:55.968343  249055 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 00:17:55.989357  249055 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 00:17:55.990439  249055 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 00:17:55.990548  249055 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1031 00:17:56.126548  249055 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 00:17:56.479126  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:58.479232  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:56.166722  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:58.667319  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:00.980893  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:03.481571  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:04.629984  249055 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502923 seconds
	I1031 00:18:04.630137  249055 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 00:18:04.643529  249055 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 00:18:05.178336  249055 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 00:18:05.178549  249055 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-892233 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 00:18:05.695447  249055 kubeadm.go:322] [bootstrap-token] Using token: g00nr2.87o2mnv2u0jwf81d
	I1031 00:18:01.165232  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:03.166303  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:05.664899  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:05.696918  249055 out.go:204]   - Configuring RBAC rules ...
	I1031 00:18:05.697075  249055 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 00:18:05.706237  249055 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 00:18:05.720767  249055 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 00:18:05.731239  249055 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 00:18:05.736130  249055 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 00:18:05.740949  249055 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 00:18:05.759998  249055 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 00:18:06.051798  249055 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 00:18:06.118986  249055 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 00:18:06.119014  249055 kubeadm.go:322] 
	I1031 00:18:06.119078  249055 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 00:18:06.119084  249055 kubeadm.go:322] 
	I1031 00:18:06.119179  249055 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 00:18:06.119190  249055 kubeadm.go:322] 
	I1031 00:18:06.119225  249055 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 00:18:06.119282  249055 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 00:18:06.119326  249055 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 00:18:06.119332  249055 kubeadm.go:322] 
	I1031 00:18:06.119376  249055 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1031 00:18:06.119382  249055 kubeadm.go:322] 
	I1031 00:18:06.119424  249055 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 00:18:06.119435  249055 kubeadm.go:322] 
	I1031 00:18:06.119484  249055 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 00:18:06.119551  249055 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 00:18:06.119677  249055 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 00:18:06.119703  249055 kubeadm.go:322] 
	I1031 00:18:06.119830  249055 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 00:18:06.119938  249055 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 00:18:06.119957  249055 kubeadm.go:322] 
	I1031 00:18:06.120024  249055 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token g00nr2.87o2mnv2u0jwf81d \
	I1031 00:18:06.120179  249055 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 \
	I1031 00:18:06.120208  249055 kubeadm.go:322] 	--control-plane 
	I1031 00:18:06.120219  249055 kubeadm.go:322] 
	I1031 00:18:06.120330  249055 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 00:18:06.120368  249055 kubeadm.go:322] 
	I1031 00:18:06.120468  249055 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token g00nr2.87o2mnv2u0jwf81d \
	I1031 00:18:06.120559  249055 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1031 00:18:06.121091  249055 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 00:18:06.121119  249055 cni.go:84] Creating CNI manager for ""
	I1031 00:18:06.121127  249055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:18:06.123073  249055 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:18:06.124566  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:18:06.140064  249055 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:18:06.171195  249055 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:18:06.171343  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:06.171359  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4 minikube.k8s.io/name=default-k8s-diff-port-892233 minikube.k8s.io/updated_at=2023_10_31T00_18_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:06.256957  249055 ops.go:34] apiserver oom_adj: -16
	I1031 00:18:06.637700  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:06.769942  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:07.383359  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:07.883621  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:08.384017  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:08.883751  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:05.979125  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:07.979280  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:09.981296  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:07.666495  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:10.165765  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:09.383896  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:09.883523  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:10.384077  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:10.883546  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:11.383417  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:11.883493  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:12.384043  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:12.884000  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:13.383479  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:13.884100  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:12.479614  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:14.978890  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:12.666054  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:15.163419  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:14.384001  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:14.884297  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:15.383607  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:15.883617  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:16.383591  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:16.884141  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:17.384112  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:17.884196  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:18.384156  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:18.883687  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:19.114222  249055 kubeadm.go:1081] duration metric: took 12.942949327s to wait for elevateKubeSystemPrivileges.
	I1031 00:18:19.114261  249055 kubeadm.go:406] StartCluster complete in 5m10.335188993s
	I1031 00:18:19.114295  249055 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:18:19.114401  249055 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:18:19.116632  249055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:18:19.116971  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:18:19.117107  249055 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:18:19.117188  249055 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-892233"
	I1031 00:18:19.117202  249055 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-892233"
	I1031 00:18:19.117221  249055 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-892233"
	I1031 00:18:19.117231  249055 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-892233"
	I1031 00:18:19.117239  249055 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-892233"
	W1031 00:18:19.117243  249055 addons.go:240] addon metrics-server should already be in state true
	I1031 00:18:19.117265  249055 config.go:182] Loaded profile config "default-k8s-diff-port-892233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:18:19.117305  249055 host.go:66] Checking if "default-k8s-diff-port-892233" exists ...
	I1031 00:18:19.117213  249055 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-892233"
	W1031 00:18:19.117326  249055 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:18:19.117372  249055 host.go:66] Checking if "default-k8s-diff-port-892233" exists ...
	I1031 00:18:19.117711  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.117740  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.117746  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.117761  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.117711  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.117830  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.134384  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38003
	I1031 00:18:19.134426  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I1031 00:18:19.134810  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.134915  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.135437  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.135461  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.135648  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.135675  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.136018  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.136074  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.136578  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.136625  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.137167  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.137198  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.144184  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35153
	I1031 00:18:19.144763  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.145263  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.145293  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.145648  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.145852  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:18:19.152132  249055 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-892233"
	W1031 00:18:19.152194  249055 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:18:19.152240  249055 host.go:66] Checking if "default-k8s-diff-port-892233" exists ...
	I1031 00:18:19.152775  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.152867  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.154334  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45793
	I1031 00:18:19.155862  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38905
	I1031 00:18:19.157267  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.158677  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.158735  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.158863  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.164983  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.165014  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.165044  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.166267  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.166284  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:18:19.169122  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:18:19.169199  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:18:19.174627  249055 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:18:19.170934  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:18:19.176219  249055 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:18:19.177591  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:18:19.177619  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:18:19.179052  249055 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:18:19.176693  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45785
	I1031 00:18:19.178184  249055 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-892233" context rescaled to 1 replicas
	I1031 00:18:19.179171  249055 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.2 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:18:19.181526  249055 out.go:177] * Verifying Kubernetes components...
	I1031 00:18:19.182930  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:18:16.980163  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:18.981179  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:17.165555  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:19.174245  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:19.181603  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.184667  249055 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:18:19.184676  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:18:19.184683  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:18:19.184698  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.179546  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.184702  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:18:19.182398  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:18:19.184914  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:18:19.185097  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:18:19.185743  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.185761  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.185827  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:18:19.186516  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.187946  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.187988  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:18:19.188014  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.188359  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.188374  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.188549  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:18:19.188757  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:18:19.189003  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:18:19.189160  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:18:19.203564  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I1031 00:18:19.203935  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.204374  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.204399  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.204741  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.204994  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:18:19.207012  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:18:19.207266  249055 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:18:19.207283  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:18:19.207302  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:18:19.209950  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.210314  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:18:19.210332  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.210507  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:18:19.210701  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:18:19.210830  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:18:19.210962  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:18:19.423829  249055 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:18:19.423852  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:18:19.440581  249055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:18:19.466961  249055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:18:19.511517  249055 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:18:19.511543  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:18:19.591560  249055 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:18:19.591588  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:18:19.628414  249055 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-892233" to be "Ready" ...
	I1031 00:18:19.628560  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 00:18:19.648329  249055 node_ready.go:49] node "default-k8s-diff-port-892233" has status "Ready":"True"
	I1031 00:18:19.648353  249055 node_ready.go:38] duration metric: took 19.904402ms waiting for node "default-k8s-diff-port-892233" to be "Ready" ...
	I1031 00:18:19.648364  249055 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:18:19.658333  249055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:18:19.692147  249055 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-j9g85" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:21.904902  249055 pod_ready.go:102] pod "coredns-5dd5756b68-j9g85" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:22.104924  249055 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.637923019s)
	I1031 00:18:22.104999  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.104997  249055 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.664373813s)
	I1031 00:18:22.105008  249055 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.476413511s)
	I1031 00:18:22.105035  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.105013  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.105052  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.105035  249055 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1031 00:18:22.105350  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.105366  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.105376  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.105388  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.105479  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | Closing plugin on server side
	I1031 00:18:22.105541  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.105554  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.105573  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.105594  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.105821  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.105852  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.105860  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.105870  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.146205  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.146231  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.146598  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.146631  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.219948  249055 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.561551335s)
	I1031 00:18:22.220017  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.220033  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.220412  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.220441  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.220459  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.220474  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.220820  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.220840  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.220853  249055 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-892233"
	I1031 00:18:22.222793  249055 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1031 00:18:22.224194  249055 addons.go:502] enable addons completed in 3.107083845s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1031 00:18:22.880805  249055 pod_ready.go:92] pod "coredns-5dd5756b68-j9g85" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:22.880840  249055 pod_ready.go:81] duration metric: took 3.18866819s waiting for pod "coredns-5dd5756b68-j9g85" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:22.880853  249055 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pjtg4" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.912036  249055 pod_ready.go:92] pod "coredns-5dd5756b68-pjtg4" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:23.912066  249055 pod_ready.go:81] duration metric: took 1.031204489s waiting for pod "coredns-5dd5756b68-pjtg4" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.912079  249055 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.918589  249055 pod_ready.go:92] pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:23.918609  249055 pod_ready.go:81] duration metric: took 6.523247ms waiting for pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.918619  249055 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.925040  249055 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:23.925059  249055 pod_ready.go:81] duration metric: took 6.434141ms waiting for pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.925067  249055 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.073002  249055 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:24.073029  249055 pod_ready.go:81] duration metric: took 147.953037ms waiting for pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.073044  249055 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-77gzz" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:21.478451  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:23.479849  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:24.473158  249055 pod_ready.go:92] pod "kube-proxy-77gzz" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:24.473184  249055 pod_ready.go:81] duration metric: took 400.13282ms waiting for pod "kube-proxy-77gzz" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.473194  249055 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.873506  249055 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:24.873528  249055 pod_ready.go:81] duration metric: took 400.328112ms waiting for pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.873538  249055 pod_ready.go:38] duration metric: took 5.225163782s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:18:24.873558  249055 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:18:24.873617  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:18:24.890474  249055 api_server.go:72] duration metric: took 5.711236569s to wait for apiserver process to appear ...
	I1031 00:18:24.890508  249055 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:18:24.890533  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:18:24.896826  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 200:
	ok
	I1031 00:18:24.898203  249055 api_server.go:141] control plane version: v1.28.3
	I1031 00:18:24.898226  249055 api_server.go:131] duration metric: took 7.708512ms to wait for apiserver health ...
	I1031 00:18:24.898234  249055 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:18:25.076806  249055 system_pods.go:59] 9 kube-system pods found
	I1031 00:18:25.076835  249055 system_pods.go:61] "coredns-5dd5756b68-j9g85" [e4534565-4d9b-44d6-bcf1-5b57645645bc] Running
	I1031 00:18:25.076840  249055 system_pods.go:61] "coredns-5dd5756b68-pjtg4" [6c771175-3c51-4988-8b90-58ff0e33a5f8] Running
	I1031 00:18:25.076845  249055 system_pods.go:61] "etcd-default-k8s-diff-port-892233" [47dea79e-371e-45ff-960e-41e96a4427e5] Running
	I1031 00:18:25.076850  249055 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-892233" [87be303c-6850-4ab1-98a3-c8a08f601965] Running
	I1031 00:18:25.076854  249055 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-892233" [7533baa8-87b4-4fa9-8385-9945e0fffaf4] Running
	I1031 00:18:25.076857  249055 system_pods.go:61] "kube-proxy-77gzz" [e7cb1c4a-2ad0-47b9-bca4-2e03d4e1cf39] Running
	I1031 00:18:25.076861  249055 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-892233" [b7630ce4-db97-45a6-a9a3-f7b8f3128182] Running
	I1031 00:18:25.076868  249055 system_pods.go:61] "metrics-server-57f55c9bc5-8pc87" [c91683ff-11bf-4530-90c3-91f4b28e2dab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:18:25.076874  249055 system_pods.go:61] "storage-provisioner" [995d33e4-0d28-4efb-8d30-d5a05d04b61c] Running
	I1031 00:18:25.076882  249055 system_pods.go:74] duration metric: took 178.64211ms to wait for pod list to return data ...
	I1031 00:18:25.076889  249055 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:18:25.272531  249055 default_sa.go:45] found service account: "default"
	I1031 00:18:25.272557  249055 default_sa.go:55] duration metric: took 195.662215ms for default service account to be created ...
	I1031 00:18:25.272567  249055 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:18:25.477225  249055 system_pods.go:86] 9 kube-system pods found
	I1031 00:18:25.477258  249055 system_pods.go:89] "coredns-5dd5756b68-j9g85" [e4534565-4d9b-44d6-bcf1-5b57645645bc] Running
	I1031 00:18:25.477266  249055 system_pods.go:89] "coredns-5dd5756b68-pjtg4" [6c771175-3c51-4988-8b90-58ff0e33a5f8] Running
	I1031 00:18:25.477275  249055 system_pods.go:89] "etcd-default-k8s-diff-port-892233" [47dea79e-371e-45ff-960e-41e96a4427e5] Running
	I1031 00:18:25.477282  249055 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-892233" [87be303c-6850-4ab1-98a3-c8a08f601965] Running
	I1031 00:18:25.477292  249055 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-892233" [7533baa8-87b4-4fa9-8385-9945e0fffaf4] Running
	I1031 00:18:25.477298  249055 system_pods.go:89] "kube-proxy-77gzz" [e7cb1c4a-2ad0-47b9-bca4-2e03d4e1cf39] Running
	I1031 00:18:25.477309  249055 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-892233" [b7630ce4-db97-45a6-a9a3-f7b8f3128182] Running
	I1031 00:18:25.477323  249055 system_pods.go:89] "metrics-server-57f55c9bc5-8pc87" [c91683ff-11bf-4530-90c3-91f4b28e2dab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:18:25.477333  249055 system_pods.go:89] "storage-provisioner" [995d33e4-0d28-4efb-8d30-d5a05d04b61c] Running
	I1031 00:18:25.477343  249055 system_pods.go:126] duration metric: took 204.769317ms to wait for k8s-apps to be running ...
	I1031 00:18:25.477356  249055 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:18:25.477416  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:18:25.494054  249055 system_svc.go:56] duration metric: took 16.688482ms WaitForService to wait for kubelet.
	I1031 00:18:25.494079  249055 kubeadm.go:581] duration metric: took 6.314858374s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:18:25.494097  249055 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:18:25.673698  249055 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:18:25.673729  249055 node_conditions.go:123] node cpu capacity is 2
	I1031 00:18:25.673742  249055 node_conditions.go:105] duration metric: took 179.63938ms to run NodePressure ...
	I1031 00:18:25.673756  249055 start.go:228] waiting for startup goroutines ...
	I1031 00:18:25.673764  249055 start.go:233] waiting for cluster config update ...
	I1031 00:18:25.673778  249055 start.go:242] writing updated cluster config ...
	I1031 00:18:25.674107  249055 ssh_runner.go:195] Run: rm -f paused
	I1031 00:18:25.729477  249055 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1031 00:18:25.731433  249055 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-892233" cluster and "default" namespace by default
	I1031 00:18:21.666578  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:23.667065  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:25.980194  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:27.983361  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:26.166839  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:28.664820  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:30.665038  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:30.478938  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:32.980862  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:33.164907  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:35.165601  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:35.479491  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:37.978397  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:39.979837  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:37.167604  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:39.665586  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:41.982368  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:44.476905  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:41.359122  248084 pod_ready.go:81] duration metric: took 4m0.000818862s waiting for pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace to be "Ready" ...
	E1031 00:18:41.359173  248084 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:18:41.359193  248084 pod_ready.go:38] duration metric: took 4m1.201522433s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:18:41.359227  248084 kubeadm.go:640] restartCluster took 5m7.223824608s
	W1031 00:18:41.359305  248084 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1031 00:18:41.359335  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1031 00:18:46.480820  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:48.487440  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:46.413914  248084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.054544075s)
	I1031 00:18:46.414001  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:18:46.427362  248084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:18:46.436557  248084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:18:46.444929  248084 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:18:46.445010  248084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1031 00:18:46.659252  248084 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 00:18:50.978966  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:52.980133  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:59.061122  248084 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1031 00:18:59.061211  248084 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 00:18:59.061324  248084 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 00:18:59.061476  248084 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 00:18:59.061695  248084 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 00:18:59.061861  248084 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 00:18:59.061989  248084 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 00:18:59.062059  248084 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1031 00:18:59.062158  248084 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 00:18:59.063991  248084 out.go:204]   - Generating certificates and keys ...
	I1031 00:18:59.064091  248084 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 00:18:59.064178  248084 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 00:18:59.064261  248084 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1031 00:18:59.064320  248084 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1031 00:18:59.064400  248084 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1031 00:18:59.064478  248084 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1031 00:18:59.064590  248084 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1031 00:18:59.064687  248084 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1031 00:18:59.064777  248084 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1031 00:18:59.064884  248084 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1031 00:18:59.064967  248084 kubeadm.go:322] [certs] Using the existing "sa" key
	I1031 00:18:59.065056  248084 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 00:18:59.065123  248084 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 00:18:59.065199  248084 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 00:18:59.065284  248084 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 00:18:59.065375  248084 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 00:18:59.065483  248084 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 00:18:59.067362  248084 out.go:204]   - Booting up control plane ...
	I1031 00:18:59.067477  248084 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 00:18:59.067584  248084 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 00:18:59.067655  248084 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 00:18:59.067761  248084 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 00:18:59.067952  248084 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 00:18:59.068089  248084 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004306 seconds
	I1031 00:18:59.068174  248084 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 00:18:59.068330  248084 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 00:18:59.068419  248084 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 00:18:59.068536  248084 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-225140 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1031 00:18:59.068585  248084 kubeadm.go:322] [bootstrap-token] Using token: 1g4jse.zc5opkcf3va44z15
	I1031 00:18:59.070040  248084 out.go:204]   - Configuring RBAC rules ...
	I1031 00:18:59.070142  248084 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 00:18:59.070305  248084 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 00:18:59.070451  248084 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 00:18:59.070569  248084 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 00:18:59.070657  248084 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 00:18:59.070700  248084 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 00:18:59.070742  248084 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 00:18:59.070748  248084 kubeadm.go:322] 
	I1031 00:18:59.070799  248084 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 00:18:59.070809  248084 kubeadm.go:322] 
	I1031 00:18:59.070900  248084 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 00:18:59.070912  248084 kubeadm.go:322] 
	I1031 00:18:59.070933  248084 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 00:18:59.070983  248084 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 00:18:59.071030  248084 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 00:18:59.071035  248084 kubeadm.go:322] 
	I1031 00:18:59.071082  248084 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 00:18:59.071158  248084 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 00:18:59.071269  248084 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 00:18:59.071278  248084 kubeadm.go:322] 
	I1031 00:18:59.071392  248084 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1031 00:18:59.071498  248084 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 00:18:59.071509  248084 kubeadm.go:322] 
	I1031 00:18:59.071608  248084 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1g4jse.zc5opkcf3va44z15 \
	I1031 00:18:59.071749  248084 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 \
	I1031 00:18:59.071783  248084 kubeadm.go:322]     --control-plane 	  
	I1031 00:18:59.071793  248084 kubeadm.go:322] 
	I1031 00:18:59.071899  248084 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 00:18:59.071912  248084 kubeadm.go:322] 
	I1031 00:18:59.072051  248084 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1g4jse.zc5opkcf3va44z15 \
	I1031 00:18:59.072196  248084 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1031 00:18:59.072228  248084 cni.go:84] Creating CNI manager for ""
	I1031 00:18:59.072243  248084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:18:59.073949  248084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:18:55.479295  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:57.983131  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:59.075900  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:18:59.087288  248084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
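The log only records that a 457-byte conflist was written to /etc/cni/net.d/1-k8s.conflist; the payload itself is not shown. As a hedged illustration of what a bridge CNI chain of this kind usually contains (bridge plugin + host-local IPAM + portmap), the sketch below writes a representative file with Go's standard library. The subnet, plugin options, and file mode are assumptions, not the actual minikube template.

// writeconflist.go - illustrative only; the real 1-k8s.conflist contents are
// not shown in the log, so treat this as a representative bridge CNI config.
package main

import (
	"log"
	"os"
)

// A typical bridge + host-local + portmap chain; the 10.244.0.0/16 subnet is an assumption.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// Create the CNI config directory and drop the conflist in place (requires root).
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}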
	I1031 00:18:59.112130  248084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:18:59.112241  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:59.112258  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4 minikube.k8s.io/name=old-k8s-version-225140 minikube.k8s.io/updated_at=2023_10_31T00_18_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:59.144297  248084 ops.go:34] apiserver oom_adj: -16
	I1031 00:18:59.352655  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:59.464268  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:00.069316  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:00.569382  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:00.481532  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:02.978563  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:01.069124  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:01.569535  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:02.069209  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:02.569292  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:03.069280  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:03.569469  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:04.069050  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:04.569082  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:05.068795  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:05.569625  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:05.479444  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:07.980592  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:09.982873  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:06.069318  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:06.569043  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:07.069599  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:07.569098  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:08.069690  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:08.569668  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:09.069735  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:09.569294  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:10.069080  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:10.569441  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:11.068991  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:11.569543  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:12.069495  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:12.568757  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:13.069012  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:13.569638  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:13.789009  248084 kubeadm.go:1081] duration metric: took 14.676828073s to wait for elevateKubeSystemPrivileges.
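The run of `kubectl get sa default` invocations above is a plain poll: the command is re-issued roughly every 500 ms until the `default` ServiceAccount exists (about 14.7 s here). A minimal stand-alone sketch of that wait loop, standard library only; the kubectl path and kubeconfig come from the log, while the overall 3-minute deadline is an assumption.

// poll_sa.go - re-run "kubectl get sa default" until it succeeds or a deadline
// passes. Illustrative sketch of the wait loop seen in the log, not minikube's code.
package main

import (
	"context"
	"log"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute) // assumed deadline
	defer cancel()

	args := []string{
		"get", "sa", "default",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
	}
	for {
		cmd := exec.CommandContext(ctx, "/var/lib/minikube/binaries/v1.16.0/kubectl", args...)
		if err := cmd.Run(); err == nil {
			log.Println("default ServiceAccount exists")
			return
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for the default ServiceAccount")
		case <-time.After(500 * time.Millisecond): // matches the ~500 ms cadence in the log
		}
	}
}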
	I1031 00:19:13.789061  248084 kubeadm.go:406] StartCluster complete in 5m39.716410778s
	I1031 00:19:13.789090  248084 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:19:13.789209  248084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:19:13.791883  248084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:19:13.792204  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:19:13.792368  248084 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:19:13.792451  248084 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-225140"
	I1031 00:19:13.792457  248084 config.go:182] Loaded profile config "old-k8s-version-225140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1031 00:19:13.792471  248084 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-225140"
	W1031 00:19:13.792480  248084 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:19:13.792485  248084 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-225140"
	I1031 00:19:13.792515  248084 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-225140"
	I1031 00:19:13.792531  248084 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-225140"
	I1031 00:19:13.792534  248084 host.go:66] Checking if "old-k8s-version-225140" exists ...
	W1031 00:19:13.792540  248084 addons.go:240] addon metrics-server should already be in state true
	I1031 00:19:13.792568  248084 host.go:66] Checking if "old-k8s-version-225140" exists ...
	I1031 00:19:13.792516  248084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-225140"
	I1031 00:19:13.792981  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.792981  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.793021  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.793104  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.793147  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.793254  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.811115  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34449
	I1031 00:19:13.811377  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I1031 00:19:13.811793  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.811913  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.812411  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.812433  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.812586  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.812636  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.812764  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.812833  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35585
	I1031 00:19:13.813035  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:19:13.813186  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.813284  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.813624  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.813649  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.813896  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.813938  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.813984  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.814742  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.814791  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.817328  248084 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-225140"
	W1031 00:19:13.817352  248084 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:19:13.817383  248084 host.go:66] Checking if "old-k8s-version-225140" exists ...
	I1031 00:19:13.817651  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.817676  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.831410  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I1031 00:19:13.832059  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.832665  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.832686  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.833071  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.833396  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:19:13.834672  248084 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-225140" context rescaled to 1 replicas
	I1031 00:19:13.834715  248084 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.65 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:19:13.837043  248084 out.go:177] * Verifying Kubernetes components...
	I1031 00:19:13.834927  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I1031 00:19:13.835269  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:19:13.835504  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I1031 00:19:13.837823  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.838827  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:19:13.840427  248084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:19:13.838307  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.839305  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.842067  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.842200  248084 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:19:13.842220  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:19:13.842259  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:19:13.842518  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.843110  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.843159  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.843539  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.843577  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.844178  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.844488  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:19:13.846259  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.846704  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:19:13.848811  248084 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:19:12.479334  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:14.484105  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:13.847143  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:19:13.847192  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:19:13.850295  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.850300  248084 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:19:13.850319  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:19:13.850341  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:19:13.850537  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:19:13.850712  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:19:13.851115  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:19:13.853651  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.854192  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:19:13.854226  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.854563  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:19:13.854758  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:19:13.854967  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:19:13.855112  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:19:13.862473  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33537
	I1031 00:19:13.862970  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.863496  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.863526  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.864026  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.864257  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:19:13.866270  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:19:13.866530  248084 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:19:13.866546  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:19:13.866565  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:19:13.870580  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.870992  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:19:13.871028  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.871142  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:19:13.871372  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:19:13.871542  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:19:13.871678  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:19:14.034938  248084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:19:14.040988  248084 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:19:14.041016  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:19:14.061666  248084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:19:14.111727  248084 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:19:14.111758  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:19:14.125610  248084 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-225140" to be "Ready" ...
	I1031 00:19:14.125707  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 00:19:14.165369  248084 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:19:14.165397  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:19:14.193366  248084 node_ready.go:49] node "old-k8s-version-225140" has status "Ready":"True"
	I1031 00:19:14.193389  248084 node_ready.go:38] duration metric: took 67.750717ms waiting for node "old-k8s-version-225140" to be "Ready" ...
	I1031 00:19:14.193401  248084 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:19:14.207505  248084 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace to be "Ready" ...
	I1031 00:19:14.276613  248084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:19:15.572065  248084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.537074399s)
	I1031 00:19:15.572136  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.572152  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.572177  248084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.510470973s)
	I1031 00:19:15.572219  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.572238  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.572336  248084 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.446596481s)
	I1031 00:19:15.572363  248084 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1031 00:19:15.572603  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.572621  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.572632  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.572642  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.572697  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.572711  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.572757  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.572778  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.572756  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Closing plugin on server side
	I1031 00:19:15.572908  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Closing plugin on server side
	I1031 00:19:15.572910  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.572970  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.573533  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.573554  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.586186  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.586210  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.586507  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.586530  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.586546  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Closing plugin on server side
	I1031 00:19:15.700772  248084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.424096792s)
	I1031 00:19:15.700835  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.700851  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.701196  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.701217  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.701230  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.701242  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.701531  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Closing plugin on server side
	I1031 00:19:15.701561  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.701574  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.701585  248084 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-225140"
	I1031 00:19:15.703404  248084 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1031 00:19:15.704856  248084 addons.go:502] enable addons completed in 1.91251063s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1031 00:19:16.980629  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:19.478989  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:16.278623  248084 pod_ready.go:102] pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:18.779192  248084 pod_ready.go:102] pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:21.978882  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:23.981260  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:21.276797  248084 pod_ready.go:102] pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:23.277531  248084 pod_ready.go:92] pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace has status "Ready":"True"
	I1031 00:19:23.277561  248084 pod_ready.go:81] duration metric: took 9.070020963s waiting for pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace to be "Ready" ...
	I1031 00:19:23.277575  248084 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v2pp4" in "kube-system" namespace to be "Ready" ...
	I1031 00:19:23.283345  248084 pod_ready.go:92] pod "kube-proxy-v2pp4" in "kube-system" namespace has status "Ready":"True"
	I1031 00:19:23.283367  248084 pod_ready.go:81] duration metric: took 5.78532ms waiting for pod "kube-proxy-v2pp4" in "kube-system" namespace to be "Ready" ...
	I1031 00:19:23.283374  248084 pod_ready.go:38] duration metric: took 9.089964646s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
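The pod_ready waits in this log (the coredns wait here and the interleaved metrics-server polling from the other profile) come down to fetching a pod and checking its Ready condition. A hedged client-go sketch of that check; the kubeconfig path is a placeholder, the pod name is copied from the log, and the client-go version minikube actually vendors may differ.

// podready.go - report whether a pod's Ready condition is True, the way the
// pod_ready waits in the log do. Illustrative sketch, not minikube's code.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady returns true when the PodReady condition is present and True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path is a placeholder
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"coredns-5644d7b6d9-v4lf9", metav1.GetOptions{}) // pod name taken from the log
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
}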
	I1031 00:19:23.283394  248084 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:19:23.283452  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:19:23.300275  248084 api_server.go:72] duration metric: took 9.465522842s to wait for apiserver process to appear ...
	I1031 00:19:23.300294  248084 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:19:23.300308  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:19:23.309064  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 200:
	ok
	I1031 00:19:23.310485  248084 api_server.go:141] control plane version: v1.16.0
	I1031 00:19:23.310508  248084 api_server.go:131] duration metric: took 10.207384ms to wait for apiserver health ...
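The healthz wait above is an HTTPS GET against the apiserver's /healthz endpoint until it answers 200 with body "ok". A minimal standard-library sketch; the endpoint is taken from the log, the one-second cadence and retry bound are assumptions, and skipping TLS verification is a shortcut for illustration (real callers should trust the cluster CA).

// healthz.go - poll the apiserver /healthz endpoint until it reports ok.
// Sketch only: TLS verification is skipped for brevity.
package main

import (
	"crypto/tls"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 60; i++ { // up to ~1 minute; the bound is an assumption
		resp, err := client.Get("https://192.168.72.65:8443/healthz") // endpoint from the log
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				log.Println("apiserver is healthy")
				return
			}
		}
		time.Sleep(time.Second)
	}
	log.Fatal("apiserver never became healthy")
}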
	I1031 00:19:23.310517  248084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:19:23.314181  248084 system_pods.go:59] 4 kube-system pods found
	I1031 00:19:23.314205  248084 system_pods.go:61] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:23.314210  248084 system_pods.go:61] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:23.314217  248084 system_pods.go:61] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:23.314224  248084 system_pods.go:61] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:23.314230  248084 system_pods.go:74] duration metric: took 3.706807ms to wait for pod list to return data ...
	I1031 00:19:23.314236  248084 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:19:23.316411  248084 default_sa.go:45] found service account: "default"
	I1031 00:19:23.316435  248084 default_sa.go:55] duration metric: took 2.192647ms for default service account to be created ...
	I1031 00:19:23.316443  248084 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:19:23.320111  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:23.320137  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:23.320148  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:23.320159  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:23.320167  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:23.320190  248084 retry.go:31] will retry after 199.965979ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:23.524726  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:23.524754  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:23.524760  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:23.524766  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:23.524773  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:23.524788  248084 retry.go:31] will retry after 276.623866ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:23.807038  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:23.807066  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:23.807072  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:23.807080  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:23.807087  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:23.807104  248084 retry.go:31] will retry after 316.245952ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:24.128239  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:24.128268  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:24.128277  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:24.128287  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:24.128297  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:24.128326  248084 retry.go:31] will retry after 483.558456ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:24.616454  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:24.616486  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:24.616494  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:24.616505  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:24.616514  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:24.616534  248084 retry.go:31] will retry after 700.807178ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:25.323617  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:25.323666  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:25.323675  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:25.323687  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:25.323697  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:25.323718  248084 retry.go:31] will retry after 768.27646ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:26.485923  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:28.978283  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:26.097257  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:26.097283  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:26.097288  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:26.097295  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:26.097302  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:26.097320  248084 retry.go:31] will retry after 1.004884505s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:27.108295  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:27.108330  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:27.108339  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:27.108350  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:27.108360  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:27.108380  248084 retry.go:31] will retry after 1.256932803s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:28.369629  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:28.369668  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:28.369677  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:28.369688  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:28.369698  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:28.369722  248084 retry.go:31] will retry after 1.554545012s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:29.930268  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:29.930295  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:29.930314  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:29.930322  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:29.930338  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:29.930358  248084 retry.go:31] will retry after 1.794325328s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:30.981402  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:33.478794  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:31.729473  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:31.729511  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:31.729520  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:31.729531  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:31.729542  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:31.729563  248084 retry.go:31] will retry after 2.111450847s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:33.846759  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:33.846787  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:33.846792  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:33.846801  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:33.846807  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:33.846824  248084 retry.go:31] will retry after 2.198886772s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:35.981890  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:38.478284  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:36.050460  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:36.050491  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:36.050496  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:36.050505  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:36.050512  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:36.050530  248084 retry.go:31] will retry after 3.361148685s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:39.417603  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:39.417633  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:39.417640  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:39.417651  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:39.417660  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:39.417680  248084 retry.go:31] will retry after 4.41093106s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:40.978990  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:43.479103  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:43.834041  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:43.834083  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:43.834093  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:43.834104  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:43.834115  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:43.834134  248084 retry.go:31] will retry after 5.294476287s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:45.482986  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:47.978397  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:49.980183  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:49.133233  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:49.133264  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:49.133269  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:49.133276  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:49.133284  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:49.133300  248084 retry.go:31] will retry after 7.429511286s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:51.980355  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:53.981222  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:56.480456  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:58.979640  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:56.567247  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:56.567278  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:56.567284  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:56.567290  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:56.567297  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:56.567314  248084 retry.go:31] will retry after 10.944177906s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:20:01.477606  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:03.481220  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:05.979560  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:07.984688  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:07.518274  248084 system_pods.go:86] 7 kube-system pods found
	I1031 00:20:07.518300  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:20:07.518306  248084 system_pods.go:89] "kube-apiserver-old-k8s-version-225140" [8452eeb3-bce5-4105-aca6-41c438d0cd33] Pending
	I1031 00:20:07.518310  248084 system_pods.go:89] "kube-controller-manager-old-k8s-version-225140" [8d9ce065-09f3-4323-a564-195c4ae96389] Pending
	I1031 00:20:07.518314  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:20:07.518318  248084 system_pods.go:89] "kube-scheduler-old-k8s-version-225140" [aa567dc5-4668-4730-bfee-e1afdac14098] Pending
	I1031 00:20:07.518325  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:20:07.518331  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:20:07.518349  248084 retry.go:31] will retry after 8.381829497s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:20:10.485015  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:12.978647  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:15.479489  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:17.980834  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:15.906034  248084 system_pods.go:86] 8 kube-system pods found
	I1031 00:20:15.906066  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:20:15.906074  248084 system_pods.go:89] "etcd-old-k8s-version-225140" [c3c7682d-4b48-4e50-ba06-676723621872] Pending
	I1031 00:20:15.906080  248084 system_pods.go:89] "kube-apiserver-old-k8s-version-225140" [8452eeb3-bce5-4105-aca6-41c438d0cd33] Running
	I1031 00:20:15.906087  248084 system_pods.go:89] "kube-controller-manager-old-k8s-version-225140" [8d9ce065-09f3-4323-a564-195c4ae96389] Running
	I1031 00:20:15.906093  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:20:15.906100  248084 system_pods.go:89] "kube-scheduler-old-k8s-version-225140" [aa567dc5-4668-4730-bfee-e1afdac14098] Running
	I1031 00:20:15.906109  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:20:15.906120  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:20:15.906138  248084 retry.go:31] will retry after 11.167332732s: missing components: etcd
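The system_pods wait retries with steadily growing, slightly jittered delays (from roughly 200 ms up to about 11 s) until the static control-plane pods appear. A small sketch of that backoff pattern; the multiplier, cap, jitter fraction, and overall deadline are assumptions chosen only to roughly match the delays printed above.

// backoff.go - retry a check with growing, jittered delays, as the
// system_pods wait in the log does. Illustrative sketch only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs check until it returns nil, sleeping with an exponentially
// growing, jittered delay between attempts, for at most maxWait overall.
func retry(check func() error, initial, maxDelay, maxWait time.Duration) error {
	delay := initial
	deadline := time.Now().Add(maxWait)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		// up to 25% jitter keeps concurrent waiters from synchronizing (assumed fraction)
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/4+1))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

func main() {
	attempts := 0
	err := retry(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("missing components: etcd")
		}
		return nil
	}, 200*time.Millisecond, 11*time.Second, 2*time.Minute)
	fmt.Println("done:", err)
}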
	I1031 00:20:20.481147  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:22.980858  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:24.982265  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:27.080224  248084 system_pods.go:86] 8 kube-system pods found
	I1031 00:20:27.080263  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:20:27.080272  248084 system_pods.go:89] "etcd-old-k8s-version-225140" [c3c7682d-4b48-4e50-ba06-676723621872] Running
	I1031 00:20:27.080279  248084 system_pods.go:89] "kube-apiserver-old-k8s-version-225140" [8452eeb3-bce5-4105-aca6-41c438d0cd33] Running
	I1031 00:20:27.080287  248084 system_pods.go:89] "kube-controller-manager-old-k8s-version-225140" [8d9ce065-09f3-4323-a564-195c4ae96389] Running
	I1031 00:20:27.080294  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:20:27.080301  248084 system_pods.go:89] "kube-scheduler-old-k8s-version-225140" [aa567dc5-4668-4730-bfee-e1afdac14098] Running
	I1031 00:20:27.080318  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:20:27.080332  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:20:27.080343  248084 system_pods.go:126] duration metric: took 1m3.763892339s to wait for k8s-apps to be running ...
	I1031 00:20:27.080357  248084 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:20:27.080408  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:20:27.098039  248084 system_svc.go:56] duration metric: took 17.670849ms WaitForService to wait for kubelet.
	I1031 00:20:27.098075  248084 kubeadm.go:581] duration metric: took 1m13.263332949s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:20:27.098105  248084 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:20:27.101093  248084 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:20:27.101126  248084 node_conditions.go:123] node cpu capacity is 2
	I1031 00:20:27.101182  248084 node_conditions.go:105] duration metric: took 3.066191ms to run NodePressure ...
	I1031 00:20:27.101198  248084 start.go:228] waiting for startup goroutines ...
	I1031 00:20:27.101208  248084 start.go:233] waiting for cluster config update ...
	I1031 00:20:27.101222  248084 start.go:242] writing updated cluster config ...
	I1031 00:20:27.101586  248084 ssh_runner.go:195] Run: rm -f paused
	I1031 00:20:27.157211  248084 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1031 00:20:27.159327  248084 out.go:177] 
	W1031 00:20:27.160872  248084 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1031 00:20:27.163644  248084 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1031 00:20:27.165443  248084 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-225140" cluster and "default" namespace by default
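The kubectl warning a few lines above comes from comparing the client's minor version (1.28) with the cluster's (1.16). The following is a minimal, standalone sketch of that comparison, not minikube's actual implementation; the helper name minorSkew is invented for illustration.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components
// of two "major.minor.patch" version strings.
func minorSkew(client, cluster string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(cluster)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, err := minorSkew("1.28.3", "1.16.0")
	if err != nil {
		panic(err)
	}
	fmt.Println("minor skew:", skew) // prints: minor skew: 12
}

A skew this large is why the report suggests using the bundled `minikube kubectl` instead of the host's kubectl.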
	I1031 00:20:27.481582  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:29.978812  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:32.478965  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:34.479052  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:36.486487  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:38.981098  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:41.478500  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:43.478933  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:45.978794  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:47.978937  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:49.980825  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:52.479268  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:54.978422  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:57.478476  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:59.478602  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:01.478639  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:03.479969  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:05.978907  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:08.478656  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:10.978877  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:12.981683  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:15.479094  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:17.978893  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:20.479878  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:22.483287  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:24.978077  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:26.979122  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:28.981476  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:31.478577  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:33.479816  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:35.979787  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:37.981859  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:40.477762  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:42.479382  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:44.479508  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:46.479851  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:48.482610  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:49.171002  248387 pod_ready.go:81] duration metric: took 4m0.000595541s waiting for pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace to be "Ready" ...
	E1031 00:21:49.171048  248387 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:21:49.171063  248387 pod_ready.go:38] duration metric: took 4m2.795014386s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
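The pod_ready.go lines above poll the metrics-server pod for a Ready condition until a four-minute deadline expires. Below is a rough, self-contained client-go sketch of such a wait loop; it is not minikube's code, and the kubeconfig path, poll interval, and pod name are taken from the log only for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod has a Ready condition with status True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Give up after four minutes, mirroring the deadline in the log above.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-57f55c9bc5-d2xg4", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready:", ctx.Err())
			return
		case <-time.After(2 * time.Second):
		}
	}
}

In this run the pod stays Pending for the whole window, so a loop like this ends with the "context deadline exceeded" outcome recorded above.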
	I1031 00:21:49.171097  248387 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:21:49.171149  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:21:49.171248  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:21:49.226512  248387 cri.go:89] found id: "d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:21:49.226543  248387 cri.go:89] found id: ""
	I1031 00:21:49.226555  248387 logs.go:284] 1 containers: [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850]
	I1031 00:21:49.226647  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.230993  248387 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:21:49.231060  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:21:49.270646  248387 cri.go:89] found id: "07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:21:49.270677  248387 cri.go:89] found id: ""
	I1031 00:21:49.270688  248387 logs.go:284] 1 containers: [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3]
	I1031 00:21:49.270760  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.275165  248387 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:21:49.275225  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:21:49.317730  248387 cri.go:89] found id: "12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:21:49.317757  248387 cri.go:89] found id: ""
	I1031 00:21:49.317768  248387 logs.go:284] 1 containers: [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e]
	I1031 00:21:49.317818  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.322362  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:21:49.322430  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:21:49.361430  248387 cri.go:89] found id: "6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:21:49.361462  248387 cri.go:89] found id: ""
	I1031 00:21:49.361474  248387 logs.go:284] 1 containers: [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c]
	I1031 00:21:49.361535  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.365642  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:21:49.365713  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:21:49.409230  248387 cri.go:89] found id: "744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:21:49.409258  248387 cri.go:89] found id: ""
	I1031 00:21:49.409269  248387 logs.go:284] 1 containers: [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373]
	I1031 00:21:49.409329  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.413540  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:21:49.413622  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:21:49.458477  248387 cri.go:89] found id: "d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:21:49.458506  248387 cri.go:89] found id: ""
	I1031 00:21:49.458518  248387 logs.go:284] 1 containers: [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb]
	I1031 00:21:49.458586  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.462471  248387 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:21:49.462540  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:21:49.498272  248387 cri.go:89] found id: ""
	I1031 00:21:49.498299  248387 logs.go:284] 0 containers: []
	W1031 00:21:49.498309  248387 logs.go:286] No container was found matching "kindnet"
	I1031 00:21:49.498316  248387 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:21:49.498386  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:21:49.538677  248387 cri.go:89] found id: "bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:21:49.538704  248387 cri.go:89] found id: ""
	I1031 00:21:49.538714  248387 logs.go:284] 1 containers: [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07]
	I1031 00:21:49.538776  248387 ssh_runner.go:195] Run: which crictl
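Each "listing CRI containers" step above shells out to crictl to collect container IDs for a given name filter. A small illustrative sketch of that lookup, assuming crictl is installed and runnable via sudo as in the log lines:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// findContainers returns the container IDs reported by
// `crictl ps -a --quiet --name=<name>`, one ID per output line.
func findContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := findContainers("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}

An empty result (as for "kindnet" above) simply means no container of that name exists on the node, which the log records as a warning rather than an error.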
	I1031 00:21:49.544293  248387 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:21:49.544318  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:21:49.719505  248387 logs.go:123] Gathering logs for kube-apiserver [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850] ...
	I1031 00:21:49.719542  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:21:49.770108  248387 logs.go:123] Gathering logs for kube-scheduler [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c] ...
	I1031 00:21:49.770146  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:21:49.826250  248387 logs.go:123] Gathering logs for storage-provisioner [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07] ...
	I1031 00:21:49.826289  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:21:49.864212  248387 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:21:49.864244  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:21:50.278307  248387 logs.go:123] Gathering logs for container status ...
	I1031 00:21:50.278348  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:21:50.332860  248387 logs.go:123] Gathering logs for kubelet ...
	I1031 00:21:50.332894  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 00:21:50.413002  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.413224  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.413368  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.413524  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:21:50.435703  248387 logs.go:123] Gathering logs for dmesg ...
	I1031 00:21:50.435739  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:21:50.451836  248387 logs.go:123] Gathering logs for etcd [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3] ...
	I1031 00:21:50.451865  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:21:50.493883  248387 logs.go:123] Gathering logs for coredns [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e] ...
	I1031 00:21:50.493912  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:21:50.533935  248387 logs.go:123] Gathering logs for kube-proxy [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373] ...
	I1031 00:21:50.533967  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:21:50.582053  248387 logs.go:123] Gathering logs for kube-controller-manager [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb] ...
	I1031 00:21:50.582094  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:21:50.638988  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:21:50.639021  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 00:21:50.639177  248387 out.go:239] X Problems detected in kubelet:
	W1031 00:21:50.639191  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.639201  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.639213  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.639219  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:21:50.639225  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:21:50.639232  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
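The "Found kubelet problem" entries above come from scanning journalctl output for kubelet warning and error lines. A hedged sketch of such a scan follows; the regular expression is an illustrative guess, not the exact pattern minikube uses.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// problemRe flags kubelet W/E log lines that mention RBAC "forbidden"
// list/watch failures, like the ones quoted in the report above.
var problemRe = regexp.MustCompile(`kubelet\[\d+\]: [WE]\d{4}.*(forbidden|Failed to watch|failed to list)`)

func main() {
	// Feed this program journal output, e.g.:
	//   journalctl -u kubelet -n 400 | go run scan.go
	scanner := bufio.NewScanner(os.Stdin)
	scanner.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	found := 0
	for scanner.Scan() {
		line := scanner.Text()
		if problemRe.MatchString(line) {
			found++
			fmt.Println("Found kubelet problem:", line)
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
	fmt.Printf("%d problem lines\n", found)
}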
	I1031 00:22:00.639748  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:22:00.663810  248387 api_server.go:72] duration metric: took 4m16.69659563s to wait for apiserver process to appear ...
	I1031 00:22:00.663846  248387 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:22:00.663904  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:22:00.663980  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:22:00.705584  248387 cri.go:89] found id: "d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:22:00.705611  248387 cri.go:89] found id: ""
	I1031 00:22:00.705620  248387 logs.go:284] 1 containers: [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850]
	I1031 00:22:00.705672  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.710031  248387 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:22:00.710113  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:22:00.747821  248387 cri.go:89] found id: "07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:22:00.747850  248387 cri.go:89] found id: ""
	I1031 00:22:00.747861  248387 logs.go:284] 1 containers: [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3]
	I1031 00:22:00.747926  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.752647  248387 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:22:00.752733  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:22:00.802165  248387 cri.go:89] found id: "12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:22:00.802200  248387 cri.go:89] found id: ""
	I1031 00:22:00.802210  248387 logs.go:284] 1 containers: [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e]
	I1031 00:22:00.802274  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.807367  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:22:00.807451  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:22:00.846633  248387 cri.go:89] found id: "6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:22:00.846661  248387 cri.go:89] found id: ""
	I1031 00:22:00.846670  248387 logs.go:284] 1 containers: [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c]
	I1031 00:22:00.846736  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.851197  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:22:00.851282  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:22:00.891522  248387 cri.go:89] found id: "744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:22:00.891549  248387 cri.go:89] found id: ""
	I1031 00:22:00.891559  248387 logs.go:284] 1 containers: [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373]
	I1031 00:22:00.891624  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.896269  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:22:00.896369  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:22:00.937565  248387 cri.go:89] found id: "d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:22:00.937594  248387 cri.go:89] found id: ""
	I1031 00:22:00.937606  248387 logs.go:284] 1 containers: [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb]
	I1031 00:22:00.937672  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.942205  248387 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:22:00.942287  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:22:00.984788  248387 cri.go:89] found id: ""
	I1031 00:22:00.984814  248387 logs.go:284] 0 containers: []
	W1031 00:22:00.984821  248387 logs.go:286] No container was found matching "kindnet"
	I1031 00:22:00.984827  248387 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:22:00.984883  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:22:01.032572  248387 cri.go:89] found id: "bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:22:01.032601  248387 cri.go:89] found id: ""
	I1031 00:22:01.032621  248387 logs.go:284] 1 containers: [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07]
	I1031 00:22:01.032685  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:01.037253  248387 logs.go:123] Gathering logs for container status ...
	I1031 00:22:01.037280  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:22:01.096027  248387 logs.go:123] Gathering logs for kubelet ...
	I1031 00:22:01.096065  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 00:22:01.166608  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:01.166786  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:01.166925  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:01.167075  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:22:01.188441  248387 logs.go:123] Gathering logs for etcd [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3] ...
	I1031 00:22:01.188473  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:22:01.238925  248387 logs.go:123] Gathering logs for coredns [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e] ...
	I1031 00:22:01.238961  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:22:01.278987  248387 logs.go:123] Gathering logs for kube-controller-manager [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb] ...
	I1031 00:22:01.279024  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:22:01.340249  248387 logs.go:123] Gathering logs for kube-proxy [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373] ...
	I1031 00:22:01.340284  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:22:01.381155  248387 logs.go:123] Gathering logs for storage-provisioner [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07] ...
	I1031 00:22:01.381191  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:22:01.421808  248387 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:22:01.421842  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:22:01.817836  248387 logs.go:123] Gathering logs for dmesg ...
	I1031 00:22:01.817877  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:22:01.832590  248387 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:22:01.832620  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:22:01.961348  248387 logs.go:123] Gathering logs for kube-apiserver [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850] ...
	I1031 00:22:01.961384  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:22:02.023997  248387 logs.go:123] Gathering logs for kube-scheduler [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c] ...
	I1031 00:22:02.024055  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:22:02.087279  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:22:02.087321  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 00:22:02.087437  248387 out.go:239] X Problems detected in kubelet:
	W1031 00:22:02.087460  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:02.087476  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:02.087485  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:02.087495  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:22:02.087513  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:22:02.087527  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:22:12.090012  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:22:12.096458  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 200:
	ok
	I1031 00:22:12.097833  248387 api_server.go:141] control plane version: v1.28.3
	I1031 00:22:12.097860  248387 api_server.go:131] duration metric: took 11.434005759s to wait for apiserver health ...
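The healthz probe above polls https://192.168.61.168:8443/healthz until it answers 200 with the body "ok". Below is a minimal sketch of such a probe, assuming anonymous access with TLS verification disabled purely for illustration (minikube itself authenticates with the cluster's client certificates).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.168:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}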
	I1031 00:22:12.097872  248387 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:22:12.097901  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:22:12.098004  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:22:12.161098  248387 cri.go:89] found id: "d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:22:12.161129  248387 cri.go:89] found id: ""
	I1031 00:22:12.161140  248387 logs.go:284] 1 containers: [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850]
	I1031 00:22:12.161199  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.166236  248387 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:22:12.166325  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:22:12.208793  248387 cri.go:89] found id: "07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:22:12.208815  248387 cri.go:89] found id: ""
	I1031 00:22:12.208824  248387 logs.go:284] 1 containers: [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3]
	I1031 00:22:12.208871  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.213722  248387 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:22:12.213791  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:22:12.256006  248387 cri.go:89] found id: "12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:22:12.256036  248387 cri.go:89] found id: ""
	I1031 00:22:12.256046  248387 logs.go:284] 1 containers: [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e]
	I1031 00:22:12.256116  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.260468  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:22:12.260546  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:22:12.305580  248387 cri.go:89] found id: "6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:22:12.305608  248387 cri.go:89] found id: ""
	I1031 00:22:12.305618  248387 logs.go:284] 1 containers: [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c]
	I1031 00:22:12.305687  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.313321  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:22:12.313390  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:22:12.359900  248387 cri.go:89] found id: "744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:22:12.359928  248387 cri.go:89] found id: ""
	I1031 00:22:12.359939  248387 logs.go:284] 1 containers: [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373]
	I1031 00:22:12.360003  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.364087  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:22:12.364171  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:22:12.403635  248387 cri.go:89] found id: "d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:22:12.403660  248387 cri.go:89] found id: ""
	I1031 00:22:12.403675  248387 logs.go:284] 1 containers: [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb]
	I1031 00:22:12.403743  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.408014  248387 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:22:12.408087  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:22:12.449718  248387 cri.go:89] found id: ""
	I1031 00:22:12.449741  248387 logs.go:284] 0 containers: []
	W1031 00:22:12.449748  248387 logs.go:286] No container was found matching "kindnet"
	I1031 00:22:12.449753  248387 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:22:12.449802  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:22:12.490301  248387 cri.go:89] found id: "bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:22:12.490330  248387 cri.go:89] found id: ""
	I1031 00:22:12.490340  248387 logs.go:284] 1 containers: [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07]
	I1031 00:22:12.490396  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.495061  248387 logs.go:123] Gathering logs for kube-proxy [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373] ...
	I1031 00:22:12.495125  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:22:12.537124  248387 logs.go:123] Gathering logs for kube-controller-manager [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb] ...
	I1031 00:22:12.537163  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:22:12.597600  248387 logs.go:123] Gathering logs for storage-provisioner [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07] ...
	I1031 00:22:12.597642  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:22:12.637344  248387 logs.go:123] Gathering logs for container status ...
	I1031 00:22:12.637385  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:22:12.691076  248387 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:22:12.691107  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:22:12.820546  248387 logs.go:123] Gathering logs for kube-apiserver [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850] ...
	I1031 00:22:12.820578  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:22:12.871913  248387 logs.go:123] Gathering logs for coredns [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e] ...
	I1031 00:22:12.871953  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:22:12.914661  248387 logs.go:123] Gathering logs for kube-scheduler [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c] ...
	I1031 00:22:12.914705  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:22:12.965771  248387 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:22:12.965810  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:22:13.352819  248387 logs.go:123] Gathering logs for kubelet ...
	I1031 00:22:13.352862  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 00:22:13.424722  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.424906  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.425062  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.425220  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:22:13.447363  248387 logs.go:123] Gathering logs for dmesg ...
	I1031 00:22:13.447393  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:22:13.462468  248387 logs.go:123] Gathering logs for etcd [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3] ...
	I1031 00:22:13.462502  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:22:13.507930  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:22:13.507960  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 00:22:13.508045  248387 out.go:239] X Problems detected in kubelet:
	W1031 00:22:13.508060  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.508072  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.508084  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.508097  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:22:13.508107  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:22:13.508114  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:22:23.516544  248387 system_pods.go:59] 8 kube-system pods found
	I1031 00:22:23.516574  248387 system_pods.go:61] "coredns-5dd5756b68-gp6pj" [b7086342-a1ed-42b3-819a-ad7d8211ad17] Running
	I1031 00:22:23.516579  248387 system_pods.go:61] "etcd-no-preload-640155" [d9381fc3-0181-4631-90e7-6749d37cf8ab] Running
	I1031 00:22:23.516584  248387 system_pods.go:61] "kube-apiserver-no-preload-640155" [26b9547d-6b10-428a-a26f-47b007f06402] Running
	I1031 00:22:23.516588  248387 system_pods.go:61] "kube-controller-manager-no-preload-640155" [7b5ec3dd-11a2-4409-a271-e3f4149c49fe] Running
	I1031 00:22:23.516592  248387 system_pods.go:61] "kube-proxy-pkjsl" [3cc67cf4-4a59-42bf-a6ca-b2be409f5077] Running
	I1031 00:22:23.516597  248387 system_pods.go:61] "kube-scheduler-no-preload-640155" [f027c450-e0ac-4184-88c8-5de421603b25] Running
	I1031 00:22:23.516604  248387 system_pods.go:61] "metrics-server-57f55c9bc5-d2xg4" [b16ae9e6-6deb-485f-af5c-35cafada4a39] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:22:23.516613  248387 system_pods.go:61] "storage-provisioner" [acf2b5d0-1773-4ee6-882d-daff300f9d80] Running
	I1031 00:22:23.516620  248387 system_pods.go:74] duration metric: took 11.418741675s to wait for pod list to return data ...
	I1031 00:22:23.516630  248387 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:22:23.520026  248387 default_sa.go:45] found service account: "default"
	I1031 00:22:23.520050  248387 default_sa.go:55] duration metric: took 3.413856ms for default service account to be created ...
	I1031 00:22:23.520058  248387 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:22:23.526672  248387 system_pods.go:86] 8 kube-system pods found
	I1031 00:22:23.526704  248387 system_pods.go:89] "coredns-5dd5756b68-gp6pj" [b7086342-a1ed-42b3-819a-ad7d8211ad17] Running
	I1031 00:22:23.526712  248387 system_pods.go:89] "etcd-no-preload-640155" [d9381fc3-0181-4631-90e7-6749d37cf8ab] Running
	I1031 00:22:23.526719  248387 system_pods.go:89] "kube-apiserver-no-preload-640155" [26b9547d-6b10-428a-a26f-47b007f06402] Running
	I1031 00:22:23.526729  248387 system_pods.go:89] "kube-controller-manager-no-preload-640155" [7b5ec3dd-11a2-4409-a271-e3f4149c49fe] Running
	I1031 00:22:23.526736  248387 system_pods.go:89] "kube-proxy-pkjsl" [3cc67cf4-4a59-42bf-a6ca-b2be409f5077] Running
	I1031 00:22:23.526753  248387 system_pods.go:89] "kube-scheduler-no-preload-640155" [f027c450-e0ac-4184-88c8-5de421603b25] Running
	I1031 00:22:23.526765  248387 system_pods.go:89] "metrics-server-57f55c9bc5-d2xg4" [b16ae9e6-6deb-485f-af5c-35cafada4a39] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:22:23.526776  248387 system_pods.go:89] "storage-provisioner" [acf2b5d0-1773-4ee6-882d-daff300f9d80] Running
	I1031 00:22:23.526789  248387 system_pods.go:126] duration metric: took 6.724214ms to wait for k8s-apps to be running ...
	I1031 00:22:23.526801  248387 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:22:23.526862  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:22:23.546006  248387 system_svc.go:56] duration metric: took 19.183151ms WaitForService to wait for kubelet.
	I1031 00:22:23.546038  248387 kubeadm.go:581] duration metric: took 4m39.57883274s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:22:23.546066  248387 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:22:23.550930  248387 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:22:23.550975  248387 node_conditions.go:123] node cpu capacity is 2
	I1031 00:22:23.551004  248387 node_conditions.go:105] duration metric: took 4.930974ms to run NodePressure ...
	I1031 00:22:23.551041  248387 start.go:228] waiting for startup goroutines ...
	I1031 00:22:23.551053  248387 start.go:233] waiting for cluster config update ...
	I1031 00:22:23.551064  248387 start.go:242] writing updated cluster config ...
	I1031 00:22:23.551346  248387 ssh_runner.go:195] Run: rm -f paused
	I1031 00:22:23.603812  248387 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1031 00:22:23.605925  248387 out.go:177] * Done! kubectl is now configured to use "no-preload-640155" cluster and "default" namespace by default
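The NodePressure verification a few lines above reads the node's reported capacity (2 CPUs, 17784752Ki of ephemeral storage). An illustrative client-go sketch of reading those values is given here; the kubeconfig path is an assumption, not taken from minikube's code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// Capacity is a ResourceList (map of resource name to quantity).
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n",
			node.Name, cpu.String(), storage.String())
	}
}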
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-31 00:13:13 UTC, ends at Tue 2023-10-31 00:29:29 UTC. --
	Oct 31 00:29:28 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:28.947984129Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712168947965613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=bc94af05-8fec-4f32-aacf-522e09257814 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:29:28 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:28.948635110Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e0a02ca5-158c-4da6-b0aa-64ac8aef04ca name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:29:28 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:28.948678058Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e0a02ca5-158c-4da6-b0aa-64ac8aef04ca name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:29:28 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:28.948881058Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b02ad2f08464e7bf0d0e0152d98a9e3ea4fd9c61fb13c820cd953360ac9df5b,PodSandboxId:1246bdda0a39d80178f654eadbe303e6eb499605f05298fbf1124a8c49427c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711556475982706,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c4f0f-7367-4955-a3c1-2972ac938fcd,},Annotations:map[string]string{io.kubernetes.container.hash: 964889a,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54fc00711e05416df6782fd1f612a41b9d9f4e8423c613d74902452c45a5d06,PodSandboxId:14505ca26a429c2977493ec204cea4662864280d2f58a40936dca4b50aeb343b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698711555966763404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v2pp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00b895cf-5155-458e-abf7-d890aa8bdb24,},Annotations:map[string]string{io.kubernetes.container.hash: fa9b7280,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5191f89ace8c0a3b9397f7eeb2ea00c979964318f54e77d8ddb900dd10398779,PodSandboxId:f73265bdfb045aa1e48a0fa45c6f3f5237de14c31d459a21a46f44fd5dd75b3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698711554882877203,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-v4lf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0399403f-e33d-4c8e-8420-c3c0e5c622c2,},Annotations:map[string]string{io.kubernetes.container.hash: 2807ee09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac12a0d51e7929fc6d365fd8a7650cea9dcad0da68fb6795a9aa976e1a4bce2c,PodSandboxId:f229a50755adc4acc8b68706063b0745efae6f91c1fe2645c96686dacf5d67a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698711530359275319,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f111d3056f4e1d7adaf55ddf5c5337f,},Annotations:map[st
ring]string{io.kubernetes.container.hash: d9ba4352,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd07eb095fc372d4ae6d5528a949555e0d68c1a988d72570d8540b179a5bb475,PodSandboxId:2cf58aef9978b4b3d583849c4e7d138c1f0a6a1c9f99534cc106e8f2592ced86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698711528829931236,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437b
cb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef03c12b91aae1b3775189b385685a0f56a34aa7e9cdd6c2ed14c7925555a52,PodSandboxId:35c012b3d849ae8eb3c439a853faa09e996cbd8cab2157abf7e6016fcb2ba3f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698711528848418844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf9a2574e05b88952239bf0bd14806a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 642ee56e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e82a22e28885a3c8d434e75d2ea82b563999341215ebcb4251dfcc84e6f7871,PodSandboxId:44d3e09316994598c639f386b05bc5658953fb47908e9c5ce265e0d79fbd8b0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698711528802004516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map
[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e0a02ca5-158c-4da6-b0aa-64ac8aef04ca name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:29:28 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:28.990959073Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ecaef563-3c25-4b52-b732-c43a5d9dd49c name=/runtime.v1.RuntimeService/Version
	Oct 31 00:29:28 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:28.991021244Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ecaef563-3c25-4b52-b732-c43a5d9dd49c name=/runtime.v1.RuntimeService/Version
	Oct 31 00:29:28 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:28.992400689Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8a5479cd-8b27-4b77-bd30-0fc74e5820ba name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:29:28 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:28.992772341Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712168992761122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=8a5479cd-8b27-4b77-bd30-0fc74e5820ba name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:29:28 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:28.993176336Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=38d6f3d2-7649-4ac8-8c58-f1dc4c92bb79 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:29:28 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:28.993279449Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=38d6f3d2-7649-4ac8-8c58-f1dc4c92bb79 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:29:28 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:28.993456672Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b02ad2f08464e7bf0d0e0152d98a9e3ea4fd9c61fb13c820cd953360ac9df5b,PodSandboxId:1246bdda0a39d80178f654eadbe303e6eb499605f05298fbf1124a8c49427c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711556475982706,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c4f0f-7367-4955-a3c1-2972ac938fcd,},Annotations:map[string]string{io.kubernetes.container.hash: 964889a,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54fc00711e05416df6782fd1f612a41b9d9f4e8423c613d74902452c45a5d06,PodSandboxId:14505ca26a429c2977493ec204cea4662864280d2f58a40936dca4b50aeb343b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698711555966763404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v2pp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00b895cf-5155-458e-abf7-d890aa8bdb24,},Annotations:map[string]string{io.kubernetes.container.hash: fa9b7280,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5191f89ace8c0a3b9397f7eeb2ea00c979964318f54e77d8ddb900dd10398779,PodSandboxId:f73265bdfb045aa1e48a0fa45c6f3f5237de14c31d459a21a46f44fd5dd75b3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698711554882877203,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-v4lf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0399403f-e33d-4c8e-8420-c3c0e5c622c2,},Annotations:map[string]string{io.kubernetes.container.hash: 2807ee09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac12a0d51e7929fc6d365fd8a7650cea9dcad0da68fb6795a9aa976e1a4bce2c,PodSandboxId:f229a50755adc4acc8b68706063b0745efae6f91c1fe2645c96686dacf5d67a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698711530359275319,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f111d3056f4e1d7adaf55ddf5c5337f,},Annotations:map[st
ring]string{io.kubernetes.container.hash: d9ba4352,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd07eb095fc372d4ae6d5528a949555e0d68c1a988d72570d8540b179a5bb475,PodSandboxId:2cf58aef9978b4b3d583849c4e7d138c1f0a6a1c9f99534cc106e8f2592ced86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698711528829931236,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437b
cb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef03c12b91aae1b3775189b385685a0f56a34aa7e9cdd6c2ed14c7925555a52,PodSandboxId:35c012b3d849ae8eb3c439a853faa09e996cbd8cab2157abf7e6016fcb2ba3f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698711528848418844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf9a2574e05b88952239bf0bd14806a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 642ee56e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e82a22e28885a3c8d434e75d2ea82b563999341215ebcb4251dfcc84e6f7871,PodSandboxId:44d3e09316994598c639f386b05bc5658953fb47908e9c5ce265e0d79fbd8b0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698711528802004516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map
[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=38d6f3d2-7649-4ac8-8c58-f1dc4c92bb79 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:29:29 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:29.033341324Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=71321a50-eb29-4d72-95c9-cabe74402fa8 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:29:29 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:29.033413522Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=71321a50-eb29-4d72-95c9-cabe74402fa8 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:29:29 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:29.034788020Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a83e6d89-f8d0-4129-9824-c2651dd88bc6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:29:29 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:29.035146813Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712169035135737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=a83e6d89-f8d0-4129-9824-c2651dd88bc6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:29:29 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:29.035653745Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aae375ac-0ae0-4944-9183-e053c88ec7f5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:29:29 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:29.035698054Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aae375ac-0ae0-4944-9183-e053c88ec7f5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:29:29 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:29.036048803Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b02ad2f08464e7bf0d0e0152d98a9e3ea4fd9c61fb13c820cd953360ac9df5b,PodSandboxId:1246bdda0a39d80178f654eadbe303e6eb499605f05298fbf1124a8c49427c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711556475982706,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c4f0f-7367-4955-a3c1-2972ac938fcd,},Annotations:map[string]string{io.kubernetes.container.hash: 964889a,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54fc00711e05416df6782fd1f612a41b9d9f4e8423c613d74902452c45a5d06,PodSandboxId:14505ca26a429c2977493ec204cea4662864280d2f58a40936dca4b50aeb343b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698711555966763404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v2pp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00b895cf-5155-458e-abf7-d890aa8bdb24,},Annotations:map[string]string{io.kubernetes.container.hash: fa9b7280,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5191f89ace8c0a3b9397f7eeb2ea00c979964318f54e77d8ddb900dd10398779,PodSandboxId:f73265bdfb045aa1e48a0fa45c6f3f5237de14c31d459a21a46f44fd5dd75b3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698711554882877203,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-v4lf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0399403f-e33d-4c8e-8420-c3c0e5c622c2,},Annotations:map[string]string{io.kubernetes.container.hash: 2807ee09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac12a0d51e7929fc6d365fd8a7650cea9dcad0da68fb6795a9aa976e1a4bce2c,PodSandboxId:f229a50755adc4acc8b68706063b0745efae6f91c1fe2645c96686dacf5d67a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698711530359275319,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f111d3056f4e1d7adaf55ddf5c5337f,},Annotations:map[st
ring]string{io.kubernetes.container.hash: d9ba4352,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd07eb095fc372d4ae6d5528a949555e0d68c1a988d72570d8540b179a5bb475,PodSandboxId:2cf58aef9978b4b3d583849c4e7d138c1f0a6a1c9f99534cc106e8f2592ced86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698711528829931236,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437b
cb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef03c12b91aae1b3775189b385685a0f56a34aa7e9cdd6c2ed14c7925555a52,PodSandboxId:35c012b3d849ae8eb3c439a853faa09e996cbd8cab2157abf7e6016fcb2ba3f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698711528848418844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf9a2574e05b88952239bf0bd14806a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 642ee56e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e82a22e28885a3c8d434e75d2ea82b563999341215ebcb4251dfcc84e6f7871,PodSandboxId:44d3e09316994598c639f386b05bc5658953fb47908e9c5ce265e0d79fbd8b0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698711528802004516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map
[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aae375ac-0ae0-4944-9183-e053c88ec7f5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:29:29 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:29.076109224Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8fa12794-6775-4b87-a4fc-04f80195b566 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:29:29 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:29.076177002Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8fa12794-6775-4b87-a4fc-04f80195b566 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:29:29 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:29.078505014Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d947747a-7379-4849-8f8d-1fae012f77cb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:29:29 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:29.079045075Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712169079021048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=d947747a-7379-4849-8f8d-1fae012f77cb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:29:29 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:29.079939958Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7d2fe8e5-48e3-4936-8a73-33ee7137fc67 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:29:29 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:29.079984281Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7d2fe8e5-48e3-4936-8a73-33ee7137fc67 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:29:29 old-k8s-version-225140 crio[717]: time="2023-10-31 00:29:29.080128050Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b02ad2f08464e7bf0d0e0152d98a9e3ea4fd9c61fb13c820cd953360ac9df5b,PodSandboxId:1246bdda0a39d80178f654eadbe303e6eb499605f05298fbf1124a8c49427c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711556475982706,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c4f0f-7367-4955-a3c1-2972ac938fcd,},Annotations:map[string]string{io.kubernetes.container.hash: 964889a,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54fc00711e05416df6782fd1f612a41b9d9f4e8423c613d74902452c45a5d06,PodSandboxId:14505ca26a429c2977493ec204cea4662864280d2f58a40936dca4b50aeb343b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698711555966763404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v2pp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00b895cf-5155-458e-abf7-d890aa8bdb24,},Annotations:map[string]string{io.kubernetes.container.hash: fa9b7280,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5191f89ace8c0a3b9397f7eeb2ea00c979964318f54e77d8ddb900dd10398779,PodSandboxId:f73265bdfb045aa1e48a0fa45c6f3f5237de14c31d459a21a46f44fd5dd75b3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698711554882877203,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-v4lf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0399403f-e33d-4c8e-8420-c3c0e5c622c2,},Annotations:map[string]string{io.kubernetes.container.hash: 2807ee09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac12a0d51e7929fc6d365fd8a7650cea9dcad0da68fb6795a9aa976e1a4bce2c,PodSandboxId:f229a50755adc4acc8b68706063b0745efae6f91c1fe2645c96686dacf5d67a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698711530359275319,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f111d3056f4e1d7adaf55ddf5c5337f,},Annotations:map[st
ring]string{io.kubernetes.container.hash: d9ba4352,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd07eb095fc372d4ae6d5528a949555e0d68c1a988d72570d8540b179a5bb475,PodSandboxId:2cf58aef9978b4b3d583849c4e7d138c1f0a6a1c9f99534cc106e8f2592ced86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698711528829931236,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437b
cb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef03c12b91aae1b3775189b385685a0f56a34aa7e9cdd6c2ed14c7925555a52,PodSandboxId:35c012b3d849ae8eb3c439a853faa09e996cbd8cab2157abf7e6016fcb2ba3f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698711528848418844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf9a2574e05b88952239bf0bd14806a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 642ee56e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e82a22e28885a3c8d434e75d2ea82b563999341215ebcb4251dfcc84e6f7871,PodSandboxId:44d3e09316994598c639f386b05bc5658953fb47908e9c5ce265e0d79fbd8b0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698711528802004516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map
[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7d2fe8e5-48e3-4936-8a73-33ee7137fc67 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0b02ad2f08464       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   1246bdda0a39d       storage-provisioner
	d54fc00711e05       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   14505ca26a429       kube-proxy-v2pp4
	5191f89ace8c0       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   f73265bdfb045       coredns-5644d7b6d9-v4lf9
	ac12a0d51e792       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   f229a50755adc       etcd-old-k8s-version-225140
	2ef03c12b91aa       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            0                   35c012b3d849a       kube-apiserver-old-k8s-version-225140
	cd07eb095fc37       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   2cf58aef9978b       kube-controller-manager-old-k8s-version-225140
	9e82a22e28885       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   44d3e09316994       kube-scheduler-old-k8s-version-225140
	
	* 
	* ==> coredns [5191f89ace8c0a3b9397f7eeb2ea00c979964318f54e77d8ddb900dd10398779] <==
	* .:53
	2023-10-31T00:19:15.452Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-10-31T00:19:15.452Z [INFO] CoreDNS-1.6.2
	2023-10-31T00:19:15.452Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-10-31T00:19:49.147Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	[INFO] Reloading complete
	2023-10-31T00:19:49.155Z [INFO] 127.0.0.1:52265 - 16011 "HINFO IN 3183437691474010862.4888761306246306044. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00847492s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-225140
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-225140
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4
	                    minikube.k8s.io/name=old-k8s-version-225140
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_31T00_18_59_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 00:18:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 00:28:54 +0000   Tue, 31 Oct 2023 00:18:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 00:28:54 +0000   Tue, 31 Oct 2023 00:18:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 00:28:54 +0000   Tue, 31 Oct 2023 00:18:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 00:28:54 +0000   Tue, 31 Oct 2023 00:18:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.65
	  Hostname:    old-k8s-version-225140
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 4c7d4d13a26248e28e74f239bcad1ca3
	 System UUID:                4c7d4d13-a262-48e2-8e74-f239bcad1ca3
	 Boot ID:                    a9e0c1a2-cd8b-46f5-84d2-b6651a70c64d
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-v4lf9                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-225140                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                kube-apiserver-old-k8s-version-225140             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                kube-controller-manager-old-k8s-version-225140    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m27s
	  kube-system                kube-proxy-v2pp4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-225140             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                metrics-server-74d5856cc6-hp8k4                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-225140     Node old-k8s-version-225140 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x7 over 10m)  kubelet, old-k8s-version-225140     Node old-k8s-version-225140 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet, old-k8s-version-225140     Node old-k8s-version-225140 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-225140  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Oct31 00:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073944] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.935184] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.595781] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152881] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.493574] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.637788] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.129336] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.157002] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.123131] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.234741] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[ +19.799605] systemd-fstab-generator[1031]: Ignoring "noauto" for root device
	[  +0.442816] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct31 00:14] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.573148] kauditd_printk_skb: 2 callbacks suppressed
	[Oct31 00:18] systemd-fstab-generator[3138]: Ignoring "noauto" for root device
	[  +0.769444] kauditd_printk_skb: 6 callbacks suppressed
	[Oct31 00:19] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [ac12a0d51e7929fc6d365fd8a7650cea9dcad0da68fb6795a9aa976e1a4bce2c] <==
	* 2023-10-31 00:18:50.517441 I | raft: b2b4141cc3075842 became follower at term 0
	2023-10-31 00:18:50.517461 I | raft: newRaft b2b4141cc3075842 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-10-31 00:18:50.517476 I | raft: b2b4141cc3075842 became follower at term 1
	2023-10-31 00:18:50.526871 W | auth: simple token is not cryptographically signed
	2023-10-31 00:18:50.531849 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-10-31 00:18:50.533165 I | etcdserver: b2b4141cc3075842 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-31 00:18:50.533626 I | etcdserver/membership: added member b2b4141cc3075842 [https://192.168.72.65:2380] to cluster 8411952e25aa5a8
	2023-10-31 00:18:50.534867 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-31 00:18:50.535128 I | embed: listening for metrics on http://192.168.72.65:2381
	2023-10-31 00:18:50.535276 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-31 00:18:50.818064 I | raft: b2b4141cc3075842 is starting a new election at term 1
	2023-10-31 00:18:50.818270 I | raft: b2b4141cc3075842 became candidate at term 2
	2023-10-31 00:18:50.818283 I | raft: b2b4141cc3075842 received MsgVoteResp from b2b4141cc3075842 at term 2
	2023-10-31 00:18:50.818292 I | raft: b2b4141cc3075842 became leader at term 2
	2023-10-31 00:18:50.818297 I | raft: raft.node: b2b4141cc3075842 elected leader b2b4141cc3075842 at term 2
	2023-10-31 00:18:50.818804 I | etcdserver: published {Name:old-k8s-version-225140 ClientURLs:[https://192.168.72.65:2379]} to cluster 8411952e25aa5a8
	2023-10-31 00:18:50.818869 I | embed: ready to serve client requests
	2023-10-31 00:18:50.819641 I | etcdserver: setting up the initial cluster version to 3.3
	2023-10-31 00:18:50.820469 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-31 00:18:50.820597 I | embed: ready to serve client requests
	2023-10-31 00:18:50.820762 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-10-31 00:18:50.820900 I | etcdserver/api: enabled capabilities for version 3.3
	2023-10-31 00:18:50.821734 I | embed: serving client requests on 192.168.72.65:2379
	2023-10-31 00:28:50.846656 I | mvcc: store.index: compact 647
	2023-10-31 00:28:50.849337 I | mvcc: finished scheduled compaction at 647 (took 1.620531ms)
	
	* 
	* ==> kernel <==
	*  00:29:29 up 16 min,  0 users,  load average: 0.08, 0.19, 0.22
	Linux old-k8s-version-225140 5.10.57 #1 SMP Mon Oct 30 21:42:24 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [2ef03c12b91aae1b3775189b385685a0f56a34aa7e9cdd6c2ed14c7925555a52] <==
	* I1031 00:22:17.063581       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1031 00:22:17.064000       1 handler_proxy.go:99] no RequestInfo found in the context
	E1031 00:22:17.064104       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:22:17.064132       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:23:55.192792       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1031 00:23:55.192947       1 handler_proxy.go:99] no RequestInfo found in the context
	E1031 00:23:55.193025       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:23:55.193055       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:24:55.193508       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1031 00:24:55.193879       1 handler_proxy.go:99] no RequestInfo found in the context
	E1031 00:24:55.193946       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:24:55.193968       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:26:55.194668       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1031 00:26:55.194806       1 handler_proxy.go:99] no RequestInfo found in the context
	E1031 00:26:55.194867       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:26:55.194876       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:28:55.196384       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1031 00:28:55.196731       1 handler_proxy.go:99] no RequestInfo found in the context
	E1031 00:28:55.196865       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:28:55.196927       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [cd07eb095fc372d4ae6d5528a949555e0d68c1a988d72570d8540b179a5bb475] <==
	* E1031 00:23:16.050431       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:23:30.125121       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:23:46.302712       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:24:02.127689       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:24:16.554631       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:24:34.129625       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:24:46.807360       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:25:06.131907       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:25:17.059303       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:25:38.134177       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:25:47.311323       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:26:10.136439       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:26:17.563620       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:26:42.138434       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:26:47.815689       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:27:14.140932       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:27:18.067551       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:27:46.143067       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:27:48.320333       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:28:18.145062       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:28:18.572827       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1031 00:28:48.824394       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:28:50.147461       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:29:19.076387       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:29:22.149838       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [d54fc00711e05416df6782fd1f612a41b9d9f4e8423c613d74902452c45a5d06] <==
	* W1031 00:19:16.241720       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1031 00:19:16.255057       1 node.go:135] Successfully retrieved node IP: 192.168.72.65
	I1031 00:19:16.255379       1 server_others.go:149] Using iptables Proxier.
	I1031 00:19:16.256391       1 server.go:529] Version: v1.16.0
	I1031 00:19:16.258630       1 config.go:313] Starting service config controller
	I1031 00:19:16.258679       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1031 00:19:16.258716       1 config.go:131] Starting endpoints config controller
	I1031 00:19:16.258725       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1031 00:19:16.365846       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1031 00:19:16.365943       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [9e82a22e28885a3c8d434e75d2ea82b563999341215ebcb4251dfcc84e6f7871] <==
	* W1031 00:18:54.190406       1 authentication.go:79] Authentication is disabled
	I1031 00:18:54.190421       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1031 00:18:54.190826       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1031 00:18:54.237335       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1031 00:18:54.247097       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1031 00:18:54.248533       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1031 00:18:54.249425       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1031 00:18:54.250879       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 00:18:54.250919       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1031 00:18:54.250961       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1031 00:18:54.250984       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1031 00:18:54.251039       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 00:18:54.251071       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 00:18:54.254600       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1031 00:18:55.239380       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1031 00:18:55.249921       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1031 00:18:55.259490       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1031 00:18:55.259952       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1031 00:18:55.261107       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 00:18:55.263157       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1031 00:18:55.264692       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1031 00:18:55.266524       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1031 00:18:55.267594       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 00:18:55.271037       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 00:18:55.271932       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-31 00:13:13 UTC, ends at Tue 2023-10-31 00:29:29 UTC. --
	Oct 31 00:24:56 old-k8s-version-225140 kubelet[3144]: E1031 00:24:56.706612    3144 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 31 00:24:56 old-k8s-version-225140 kubelet[3144]: E1031 00:24:56.706725    3144 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 31 00:24:56 old-k8s-version-225140 kubelet[3144]: E1031 00:24:56.706782    3144 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 31 00:24:56 old-k8s-version-225140 kubelet[3144]: E1031 00:24:56.706812    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Oct 31 00:25:09 old-k8s-version-225140 kubelet[3144]: E1031 00:25:09.670941    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:25:23 old-k8s-version-225140 kubelet[3144]: E1031 00:25:23.672083    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:25:36 old-k8s-version-225140 kubelet[3144]: E1031 00:25:36.670916    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:25:51 old-k8s-version-225140 kubelet[3144]: E1031 00:25:51.670738    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:26:06 old-k8s-version-225140 kubelet[3144]: E1031 00:26:06.670847    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:26:20 old-k8s-version-225140 kubelet[3144]: E1031 00:26:20.670079    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:26:32 old-k8s-version-225140 kubelet[3144]: E1031 00:26:32.670133    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:26:43 old-k8s-version-225140 kubelet[3144]: E1031 00:26:43.670709    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:26:56 old-k8s-version-225140 kubelet[3144]: E1031 00:26:56.670184    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:27:11 old-k8s-version-225140 kubelet[3144]: E1031 00:27:11.670027    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:27:23 old-k8s-version-225140 kubelet[3144]: E1031 00:27:23.670118    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:27:35 old-k8s-version-225140 kubelet[3144]: E1031 00:27:35.671927    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:27:50 old-k8s-version-225140 kubelet[3144]: E1031 00:27:50.670299    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:28:01 old-k8s-version-225140 kubelet[3144]: E1031 00:28:01.670281    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:28:14 old-k8s-version-225140 kubelet[3144]: E1031 00:28:14.670177    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:28:28 old-k8s-version-225140 kubelet[3144]: E1031 00:28:28.670860    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:28:40 old-k8s-version-225140 kubelet[3144]: E1031 00:28:40.670260    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:28:47 old-k8s-version-225140 kubelet[3144]: E1031 00:28:47.765508    3144 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Oct 31 00:28:53 old-k8s-version-225140 kubelet[3144]: E1031 00:28:53.671165    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:29:05 old-k8s-version-225140 kubelet[3144]: E1031 00:29:05.670048    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:29:19 old-k8s-version-225140 kubelet[3144]: E1031 00:29:19.671619    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [0b02ad2f08464e7bf0d0e0152d98a9e3ea4fd9c61fb13c820cd953360ac9df5b] <==
	* I1031 00:19:16.641666       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1031 00:19:16.654394       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1031 00:19:16.654692       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1031 00:19:16.665081       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1031 00:19:16.665954       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-225140_ce0dc0db-8787-4c4d-97f0-3234b29ab329!
	I1031 00:19:16.666680       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4c9cfe66-bee6-4ee2-a864-0a1337880c73", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-225140_ce0dc0db-8787-4c4d-97f0-3234b29ab329 became leader
	I1031 00:19:16.767851       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-225140_ce0dc0db-8787-4c4d-97f0-3234b29ab329!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-225140 -n old-k8s-version-225140
E1031 00:29:30.631167  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-225140 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-hp8k4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-225140 describe pod metrics-server-74d5856cc6-hp8k4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-225140 describe pod metrics-server-74d5856cc6-hp8k4: exit status 1 (77.896143ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-hp8k4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-225140 describe pod metrics-server-74d5856cc6-hp8k4: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.33s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1031 00:24:14.583489  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1031 00:24:30.631730  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
E1031 00:25:37.632157  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-640155 -n no-preload-640155
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-10-31 00:31:24.216610612 +0000 UTC m=+5386.466627533
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-640155 -n no-preload-640155
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-640155 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-640155 logs -n 25: (1.692681599s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p cert-options-344463                                 | cert-options-344463          | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:02 UTC | 31 Oct 23 00:02 UTC |
	| start   | -p no-preload-640155                                   | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:02 UTC | 31 Oct 23 00:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| start   | -p stopped-upgrade-237143                              | stopped-upgrade-237143       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p stopped-upgrade-237143                              | stopped-upgrade-237143       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC | 31 Oct 23 00:04 UTC |
	| start   | -p embed-certs-078843                                  | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC | 31 Oct 23 00:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-225140        | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC | 31 Oct 23 00:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-225140                              | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-640155             | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:05 UTC | 31 Oct 23 00:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-640155                                   | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p cert-expiration-663908                              | cert-expiration-663908       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:05 UTC | 31 Oct 23 00:06 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-078843            | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-078843                                  | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-663908                              | cert-expiration-663908       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-221554 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:06 UTC |
	|         | disable-driver-mounts-221554                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:07 UTC |
	|         | default-k8s-diff-port-892233                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-225140             | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-225140                              | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC | 31 Oct 23 00:20 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-892233  | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC | 31 Oct 23 00:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC |                     |
	|         | default-k8s-diff-port-892233                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-640155                  | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-640155                                   | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC | 31 Oct 23 00:22 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-078843                 | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-078843                                  | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:08 UTC | 31 Oct 23 00:17 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-892233       | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:09 UTC | 31 Oct 23 00:18 UTC |
	|         | default-k8s-diff-port-892233                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 00:09:59
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 00:09:59.171110  249055 out.go:296] Setting OutFile to fd 1 ...
	I1031 00:09:59.171372  249055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:09:59.171383  249055 out.go:309] Setting ErrFile to fd 2...
	I1031 00:09:59.171387  249055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:09:59.171591  249055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1031 00:09:59.172151  249055 out.go:303] Setting JSON to false
	I1031 00:09:59.173091  249055 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28351,"bootTime":1698682648,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 00:09:59.173154  249055 start.go:138] virtualization: kvm guest
	I1031 00:09:59.175712  249055 out.go:177] * [default-k8s-diff-port-892233] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 00:09:59.177218  249055 notify.go:220] Checking for updates...
	I1031 00:09:59.177238  249055 out.go:177]   - MINIKUBE_LOCATION=17527
	I1031 00:09:59.178590  249055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 00:09:59.179936  249055 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:09:59.181243  249055 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1031 00:09:59.182619  249055 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 00:09:59.184021  249055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 00:09:59.185755  249055 config.go:182] Loaded profile config "default-k8s-diff-port-892233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:09:59.186187  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:09:59.186242  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:09:59.200537  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I1031 00:09:59.201002  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:09:59.201576  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:09:59.201596  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:09:59.201949  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:09:59.202159  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:09:59.202362  249055 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 00:09:59.202635  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:09:59.202680  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:09:59.216197  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35869
	I1031 00:09:59.216575  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:09:59.216998  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:09:59.217027  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:09:59.217349  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:09:59.217537  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:09:59.250565  249055 out.go:177] * Using the kvm2 driver based on existing profile
	I1031 00:09:59.251974  249055 start.go:298] selected driver: kvm2
	I1031 00:09:59.251988  249055 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-892233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-892233 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.2 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:09:59.252123  249055 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 00:09:59.253132  249055 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:09:59.253220  249055 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17527-208817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 00:09:59.266948  249055 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 00:09:59.267297  249055 start_flags.go:934] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 00:09:59.267362  249055 cni.go:84] Creating CNI manager for ""
	I1031 00:09:59.267383  249055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:09:59.267401  249055 start_flags.go:323] config:
	{Name:default-k8s-diff-port-892233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-892233 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.2 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:09:59.267557  249055 iso.go:125] acquiring lock: {Name:mk17c26869b21ec4c3726ac5b4b2fb393d92c043 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:09:59.269225  249055 out.go:177] * Starting control plane node default-k8s-diff-port-892233 in cluster default-k8s-diff-port-892233
	I1031 00:09:57.481224  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:00.553221  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:09:59.270407  249055 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:09:59.270449  249055 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1031 00:09:59.270460  249055 cache.go:56] Caching tarball of preloaded images
	I1031 00:09:59.270553  249055 preload.go:174] Found /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1031 00:09:59.270569  249055 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1031 00:09:59.270702  249055 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/config.json ...
	I1031 00:09:59.270937  249055 start.go:365] acquiring machines lock for default-k8s-diff-port-892233: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 00:10:06.633217  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:09.705265  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:15.785240  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:18.857227  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:24.937215  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:28.009292  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:34.089205  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:37.161208  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:43.241288  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:46.313160  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:52.393273  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:55.465205  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:01.545192  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:04.617227  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:10.697233  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:13.769258  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:19.849250  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:22.921270  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:29.001178  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:32.073257  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:38.153271  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:41.225244  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:47.305235  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:50.377235  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:53.381665  248387 start.go:369] acquired machines lock for "no-preload-640155" in 4m7.945210729s
	I1031 00:11:53.381722  248387 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:11:53.381734  248387 fix.go:54] fixHost starting: 
	I1031 00:11:53.382372  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:11:53.382418  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:11:53.397155  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43017
	I1031 00:11:53.397704  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:11:53.398181  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:11:53.398206  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:11:53.398561  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:11:53.398761  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:11:53.398909  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:11:53.400611  248387 fix.go:102] recreateIfNeeded on no-preload-640155: state=Stopped err=<nil>
	I1031 00:11:53.400634  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	W1031 00:11:53.400782  248387 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:11:53.402394  248387 out.go:177] * Restarting existing kvm2 VM for "no-preload-640155" ...
	I1031 00:11:53.403767  248387 main.go:141] libmachine: (no-preload-640155) Calling .Start
	I1031 00:11:53.403944  248387 main.go:141] libmachine: (no-preload-640155) Ensuring networks are active...
	I1031 00:11:53.404678  248387 main.go:141] libmachine: (no-preload-640155) Ensuring network default is active
	I1031 00:11:53.405127  248387 main.go:141] libmachine: (no-preload-640155) Ensuring network mk-no-preload-640155 is active
	I1031 00:11:53.405642  248387 main.go:141] libmachine: (no-preload-640155) Getting domain xml...
	I1031 00:11:53.406300  248387 main.go:141] libmachine: (no-preload-640155) Creating domain...
	I1031 00:11:54.646418  248387 main.go:141] libmachine: (no-preload-640155) Waiting to get IP...
	I1031 00:11:54.647560  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:54.647956  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:54.648034  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:54.647947  249366 retry.go:31] will retry after 237.521879ms: waiting for machine to come up
	I1031 00:11:54.887446  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:54.887861  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:54.887895  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:54.887804  249366 retry.go:31] will retry after 320.996838ms: waiting for machine to come up
	I1031 00:11:53.379251  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:11:53.379302  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:11:53.381458  248084 machine.go:91] provisioned docker machine in 4m37.397131013s
	I1031 00:11:53.381513  248084 fix.go:56] fixHost completed within 4m37.420319931s
	I1031 00:11:53.381528  248084 start.go:83] releasing machines lock for "old-k8s-version-225140", held for 4m37.420354195s
	W1031 00:11:53.381569  248084 start.go:691] error starting host: provision: host is not running
	W1031 00:11:53.381676  248084 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1031 00:11:53.381687  248084 start.go:706] Will try again in 5 seconds ...
	I1031 00:11:55.210309  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:55.210784  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:55.210818  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:55.210728  249366 retry.go:31] will retry after 412.198071ms: waiting for machine to come up
	I1031 00:11:55.624299  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:55.624689  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:55.624721  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:55.624647  249366 retry.go:31] will retry after 596.339141ms: waiting for machine to come up
	I1031 00:11:56.222381  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:56.222918  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:56.222952  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:56.222864  249366 retry.go:31] will retry after 640.775314ms: waiting for machine to come up
	I1031 00:11:56.865881  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:56.866355  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:56.866394  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:56.866321  249366 retry.go:31] will retry after 797.697217ms: waiting for machine to come up
	I1031 00:11:57.665413  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:57.665930  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:57.665971  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:57.665871  249366 retry.go:31] will retry after 808.934364ms: waiting for machine to come up
	I1031 00:11:58.476161  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:58.476620  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:58.476651  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:58.476582  249366 retry.go:31] will retry after 1.198576442s: waiting for machine to come up
	I1031 00:11:59.676957  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:59.677540  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:59.677575  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:59.677462  249366 retry.go:31] will retry after 1.122967081s: waiting for machine to come up
	I1031 00:11:58.383586  248084 start.go:365] acquiring machines lock for old-k8s-version-225140: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 00:12:00.801790  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:00.802278  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:00.802313  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:00.802216  249366 retry.go:31] will retry after 2.182263229s: waiting for machine to come up
	I1031 00:12:02.987870  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:02.988307  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:02.988339  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:02.988235  249366 retry.go:31] will retry after 2.73312352s: waiting for machine to come up
	I1031 00:12:05.723196  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:05.723664  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:05.723695  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:05.723595  249366 retry.go:31] will retry after 2.33306923s: waiting for machine to come up
	I1031 00:12:08.060086  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:08.060364  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:08.060394  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:08.060328  249366 retry.go:31] will retry after 2.770780436s: waiting for machine to come up
	I1031 00:12:10.834601  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:10.834995  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:10.835020  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:10.834939  249366 retry.go:31] will retry after 4.389090657s: waiting for machine to come up
	I1031 00:12:16.389786  248718 start.go:369] acquired machines lock for "embed-certs-078843" in 3m38.778041195s
	I1031 00:12:16.389855  248718 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:12:16.389864  248718 fix.go:54] fixHost starting: 
	I1031 00:12:16.390317  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:12:16.390362  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:12:16.407875  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36031
	I1031 00:12:16.408273  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:12:16.408842  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:12:16.408870  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:12:16.409226  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:12:16.409404  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:16.409574  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:12:16.410975  248718 fix.go:102] recreateIfNeeded on embed-certs-078843: state=Stopped err=<nil>
	I1031 00:12:16.411013  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	W1031 00:12:16.411196  248718 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:12:16.413529  248718 out.go:177] * Restarting existing kvm2 VM for "embed-certs-078843" ...
	I1031 00:12:16.414858  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Start
	I1031 00:12:16.415041  248718 main.go:141] libmachine: (embed-certs-078843) Ensuring networks are active...
	I1031 00:12:16.415738  248718 main.go:141] libmachine: (embed-certs-078843) Ensuring network default is active
	I1031 00:12:16.416116  248718 main.go:141] libmachine: (embed-certs-078843) Ensuring network mk-embed-certs-078843 is active
	I1031 00:12:16.416450  248718 main.go:141] libmachine: (embed-certs-078843) Getting domain xml...
	I1031 00:12:16.417190  248718 main.go:141] libmachine: (embed-certs-078843) Creating domain...
	I1031 00:12:15.226912  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.227453  248387 main.go:141] libmachine: (no-preload-640155) Found IP for machine: 192.168.61.168
	I1031 00:12:15.227473  248387 main.go:141] libmachine: (no-preload-640155) Reserving static IP address...
	I1031 00:12:15.227513  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has current primary IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.227861  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "no-preload-640155", mac: "52:54:00:bd:a4:c2", ip: "192.168.61.168"} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.227890  248387 main.go:141] libmachine: (no-preload-640155) DBG | skip adding static IP to network mk-no-preload-640155 - found existing host DHCP lease matching {name: "no-preload-640155", mac: "52:54:00:bd:a4:c2", ip: "192.168.61.168"}
	I1031 00:12:15.227900  248387 main.go:141] libmachine: (no-preload-640155) Reserved static IP address: 192.168.61.168
	I1031 00:12:15.227919  248387 main.go:141] libmachine: (no-preload-640155) Waiting for SSH to be available...
	I1031 00:12:15.227938  248387 main.go:141] libmachine: (no-preload-640155) DBG | Getting to WaitForSSH function...
	I1031 00:12:15.230076  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.230450  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.230556  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.230578  248387 main.go:141] libmachine: (no-preload-640155) DBG | Using SSH client type: external
	I1031 00:12:15.230601  248387 main.go:141] libmachine: (no-preload-640155) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa (-rw-------)
	I1031 00:12:15.230646  248387 main.go:141] libmachine: (no-preload-640155) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.168 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:12:15.230666  248387 main.go:141] libmachine: (no-preload-640155) DBG | About to run SSH command:
	I1031 00:12:15.230678  248387 main.go:141] libmachine: (no-preload-640155) DBG | exit 0
	I1031 00:12:15.316515  248387 main.go:141] libmachine: (no-preload-640155) DBG | SSH cmd err, output: <nil>: 
	I1031 00:12:15.316855  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetConfigRaw
	I1031 00:12:15.317658  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetIP
	I1031 00:12:15.320306  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.320647  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.320679  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.321008  248387 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/config.json ...
	I1031 00:12:15.321252  248387 machine.go:88] provisioning docker machine ...
	I1031 00:12:15.321275  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:15.321492  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetMachineName
	I1031 00:12:15.321669  248387 buildroot.go:166] provisioning hostname "no-preload-640155"
	I1031 00:12:15.321691  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetMachineName
	I1031 00:12:15.321858  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.324151  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.324480  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.324518  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.324657  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:15.324849  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.325057  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.325237  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:15.325416  248387 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:15.325795  248387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.168 22 <nil> <nil>}
	I1031 00:12:15.325815  248387 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-640155 && echo "no-preload-640155" | sudo tee /etc/hostname
	I1031 00:12:15.450048  248387 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-640155
	
	I1031 00:12:15.450079  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.452951  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.453298  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.453344  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.453430  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:15.453657  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.453800  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.453899  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:15.454055  248387 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:15.454540  248387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.168 22 <nil> <nil>}
	I1031 00:12:15.454569  248387 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-640155' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-640155/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-640155' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:12:15.574041  248387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:12:15.574072  248387 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:12:15.574104  248387 buildroot.go:174] setting up certificates
	I1031 00:12:15.574116  248387 provision.go:83] configureAuth start
	I1031 00:12:15.574125  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetMachineName
	I1031 00:12:15.574451  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetIP
	I1031 00:12:15.577558  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.578020  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.578059  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.578197  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.580453  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.580832  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.580876  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.581078  248387 provision.go:138] copyHostCerts
	I1031 00:12:15.581171  248387 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:12:15.581184  248387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:12:15.581256  248387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:12:15.581407  248387 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:12:15.581420  248387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:12:15.581453  248387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:12:15.581522  248387 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:12:15.581530  248387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:12:15.581560  248387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:12:15.581611  248387 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.no-preload-640155 san=[192.168.61.168 192.168.61.168 localhost 127.0.0.1 minikube no-preload-640155]
	I1031 00:12:15.693832  248387 provision.go:172] copyRemoteCerts
	I1031 00:12:15.693906  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:12:15.693934  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.696811  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.697210  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.697258  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.697471  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:15.697683  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.697870  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:15.698054  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:12:15.781207  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:12:15.803665  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1031 00:12:15.826369  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 00:12:15.849259  248387 provision.go:86] duration metric: configureAuth took 275.127597ms
	I1031 00:12:15.849292  248387 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:12:15.849476  248387 config.go:182] Loaded profile config "no-preload-640155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:12:15.849565  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.852413  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.852804  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.852848  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.853027  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:15.853227  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.853440  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.853549  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:15.853724  248387 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:15.854104  248387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.168 22 <nil> <nil>}
	I1031 00:12:15.854132  248387 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:12:16.147033  248387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:12:16.147078  248387 machine.go:91] provisioned docker machine in 825.808812ms
	I1031 00:12:16.147094  248387 start.go:300] post-start starting for "no-preload-640155" (driver="kvm2")
	I1031 00:12:16.147110  248387 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:12:16.147138  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.147515  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:12:16.147545  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:16.150321  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.150755  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.150798  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.150909  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:16.151155  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.151335  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:16.151493  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:12:16.237897  248387 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:12:16.242343  248387 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:12:16.242367  248387 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:12:16.242440  248387 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:12:16.242526  248387 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:12:16.242636  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:12:16.250454  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:12:16.273390  248387 start.go:303] post-start completed in 126.280341ms
	I1031 00:12:16.273411  248387 fix.go:56] fixHost completed within 22.891678533s
	I1031 00:12:16.273433  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:16.276291  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.276598  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.276630  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.276761  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:16.276989  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.277270  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.277434  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:16.277621  248387 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:16.277984  248387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.168 22 <nil> <nil>}
	I1031 00:12:16.277998  248387 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 00:12:16.389581  248387 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698711136.336935137
	
	I1031 00:12:16.389607  248387 fix.go:206] guest clock: 1698711136.336935137
	I1031 00:12:16.389621  248387 fix.go:219] Guest: 2023-10-31 00:12:16.336935137 +0000 UTC Remote: 2023-10-31 00:12:16.273414732 +0000 UTC m=+271.294357841 (delta=63.520405ms)
	I1031 00:12:16.389652  248387 fix.go:190] guest clock delta is within tolerance: 63.520405ms
	I1031 00:12:16.389659  248387 start.go:83] releasing machines lock for "no-preload-640155", held for 23.007957251s
	I1031 00:12:16.389694  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.390027  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetIP
	I1031 00:12:16.392988  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.393466  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.393493  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.393639  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.394137  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.394306  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.394401  248387 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:12:16.394449  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:16.394583  248387 ssh_runner.go:195] Run: cat /version.json
	I1031 00:12:16.394619  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:16.397387  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.397690  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.397757  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.397785  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.397927  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:16.398140  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.398174  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.398206  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.398296  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:16.398430  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:16.398503  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:12:16.398616  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.398784  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:16.398936  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:12:16.520353  248387 ssh_runner.go:195] Run: systemctl --version
	I1031 00:12:16.526647  248387 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:12:16.673048  248387 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:12:16.679657  248387 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:12:16.679738  248387 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:12:16.699616  248387 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 00:12:16.699643  248387 start.go:472] detecting cgroup driver to use...
	I1031 00:12:16.699706  248387 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:12:16.717466  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:12:16.729231  248387 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:12:16.729300  248387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:12:16.741665  248387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:12:16.754175  248387 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:12:16.855649  248387 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:12:16.990153  248387 docker.go:214] disabling docker service ...
	I1031 00:12:16.990239  248387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:12:17.004614  248387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:12:17.017251  248387 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:12:17.143006  248387 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:12:17.257321  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 00:12:17.271045  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:12:17.288903  248387 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1031 00:12:17.289001  248387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:17.298419  248387 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:12:17.298516  248387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:17.308045  248387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:17.317176  248387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:17.327039  248387 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 00:12:17.337269  248387 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:12:17.345814  248387 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 00:12:17.345886  248387 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 00:12:17.359110  248387 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 00:12:17.369376  248387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:12:17.480359  248387 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1031 00:12:17.658006  248387 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:12:17.658099  248387 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:12:17.663296  248387 start.go:540] Will wait 60s for crictl version
	I1031 00:12:17.663467  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:17.667483  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:12:17.709866  248387 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:12:17.709956  248387 ssh_runner.go:195] Run: crio --version
	I1031 00:12:17.757817  248387 ssh_runner.go:195] Run: crio --version
	I1031 00:12:17.812918  248387 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1031 00:12:17.814541  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetIP
	I1031 00:12:17.818008  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:17.818445  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:17.818482  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:17.818745  248387 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1031 00:12:17.822914  248387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:12:17.837885  248387 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:12:17.837941  248387 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:12:17.874977  248387 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1031 00:12:17.875010  248387 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1031 00:12:17.875097  248387 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:17.875104  248387 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:17.875130  248387 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:17.875163  248387 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1031 00:12:17.875181  248387 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:17.875233  248387 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:17.875297  248387 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:17.875306  248387 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:17.876689  248387 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:17.876731  248387 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:17.876696  248387 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:17.876842  248387 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:17.876697  248387 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:17.876695  248387 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:17.876704  248387 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:17.876842  248387 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1031 00:12:18.053090  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:18.059240  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1031 00:12:18.059239  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:18.065016  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:18.069953  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:18.071229  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:18.140026  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:18.149728  248387 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1031 00:12:18.149778  248387 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:18.149835  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.172611  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:18.238794  248387 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1031 00:12:18.238851  248387 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:18.238913  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331173  248387 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1031 00:12:18.331228  248387 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:18.331279  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331278  248387 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1031 00:12:18.331370  248387 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1031 00:12:18.331380  248387 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:18.331401  248387 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:18.331425  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331441  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331463  248387 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1031 00:12:18.331503  248387 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:18.331542  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:18.331584  248387 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1031 00:12:18.331632  248387 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:18.331665  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331545  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331591  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:18.348470  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:18.348506  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:18.348570  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:18.348619  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:18.484280  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:18.484369  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1031 00:12:18.484436  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1031 00:12:18.484501  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1031 00:12:18.484532  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I1031 00:12:18.513117  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1031 00:12:18.513211  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1031 00:12:18.513238  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I1031 00:12:18.513264  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1031 00:12:18.513307  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1031 00:12:18.513347  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1031 00:12:18.513392  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1031 00:12:18.513515  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1031 00:12:18.541278  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1031 00:12:18.541307  248387 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1031 00:12:18.541340  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1031 00:12:18.541348  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1031 00:12:18.541370  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1031 00:12:18.541416  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1031 00:12:18.541466  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1031 00:12:18.541493  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1031 00:12:18.541547  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1031 00:12:18.541549  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1031 00:12:17.727796  248718 main.go:141] libmachine: (embed-certs-078843) Waiting to get IP...
	I1031 00:12:17.728716  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:17.729132  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:17.729165  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:17.729087  249483 retry.go:31] will retry after 294.663443ms: waiting for machine to come up
	I1031 00:12:18.025671  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:18.026112  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:18.026145  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:18.026058  249483 retry.go:31] will retry after 377.887631ms: waiting for machine to come up
	I1031 00:12:18.405434  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:18.405878  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:18.405961  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:18.405857  249483 retry.go:31] will retry after 459.989463ms: waiting for machine to come up
	I1031 00:12:18.867094  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:18.867658  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:18.867693  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:18.867590  249483 retry.go:31] will retry after 552.876869ms: waiting for machine to come up
	I1031 00:12:19.422232  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:19.422678  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:19.422711  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:19.422642  249483 retry.go:31] will retry after 574.514705ms: waiting for machine to come up
	I1031 00:12:19.998587  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:19.999158  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:19.999195  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:19.999071  249483 retry.go:31] will retry after 903.246228ms: waiting for machine to come up
	I1031 00:12:20.904654  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:20.905083  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:20.905118  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:20.905028  249483 retry.go:31] will retry after 1.161301577s: waiting for machine to come up
	I1031 00:12:22.067416  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:22.067874  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:22.067906  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:22.067843  249483 retry.go:31] will retry after 1.350619049s: waiting for machine to come up
	I1031 00:12:23.419771  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:23.420313  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:23.420343  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:23.420276  249483 retry.go:31] will retry after 1.783701579s: waiting for machine to come up
	I1031 00:12:25.206301  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:25.206880  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:25.206909  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:25.206820  249483 retry.go:31] will retry after 2.304762715s: waiting for machine to come up
	I1031 00:12:25.834889  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.293473845s)
	I1031 00:12:25.834930  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1031 00:12:25.834949  248387 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.3: (7.293455157s)
	I1031 00:12:25.834967  248387 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1031 00:12:25.834986  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1031 00:12:25.835039  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1031 00:12:28.718454  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (2.883305744s)
	I1031 00:12:28.718498  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1031 00:12:28.718536  248387 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1031 00:12:28.718602  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1031 00:12:27.513250  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:27.513691  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:27.513726  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:27.513617  249483 retry.go:31] will retry after 2.77005827s: waiting for machine to come up
	I1031 00:12:30.287716  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:30.288125  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:30.288181  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:30.288095  249483 retry.go:31] will retry after 2.359494113s: waiting for machine to come up
	I1031 00:12:30.082206  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.363538098s)
	I1031 00:12:30.082241  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1031 00:12:30.082284  248387 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1031 00:12:30.082378  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1031 00:12:32.754830  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (2.672412397s)
	I1031 00:12:32.754865  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1031 00:12:32.754922  248387 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1031 00:12:32.755008  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1031 00:12:34.104402  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (1.3493522s)
	I1031 00:12:34.104443  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1031 00:12:34.104484  248387 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1031 00:12:34.104528  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1031 00:12:36.966451  249055 start.go:369] acquired machines lock for "default-k8s-diff-port-892233" in 2m37.695455763s
	I1031 00:12:36.966568  249055 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:12:36.966579  249055 fix.go:54] fixHost starting: 
	I1031 00:12:36.966927  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:12:36.966965  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:12:36.985392  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46007
	I1031 00:12:36.985889  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:12:36.986473  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:12:36.986501  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:12:36.986870  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:12:36.987100  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:12:36.987295  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:12:36.989416  249055 fix.go:102] recreateIfNeeded on default-k8s-diff-port-892233: state=Stopped err=<nil>
	I1031 00:12:36.989470  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	W1031 00:12:36.989641  249055 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:12:36.991746  249055 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-892233" ...
	I1031 00:12:32.648970  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:32.649516  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:32.649563  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:32.649477  249483 retry.go:31] will retry after 2.827972253s: waiting for machine to come up
	I1031 00:12:35.479127  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.479655  248718 main.go:141] libmachine: (embed-certs-078843) Found IP for machine: 192.168.50.2
	I1031 00:12:35.479691  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has current primary IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.479703  248718 main.go:141] libmachine: (embed-certs-078843) Reserving static IP address...
	I1031 00:12:35.480200  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "embed-certs-078843", mac: "52:54:00:f5:a8:73", ip: "192.168.50.2"} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.480259  248718 main.go:141] libmachine: (embed-certs-078843) DBG | skip adding static IP to network mk-embed-certs-078843 - found existing host DHCP lease matching {name: "embed-certs-078843", mac: "52:54:00:f5:a8:73", ip: "192.168.50.2"}
	I1031 00:12:35.480299  248718 main.go:141] libmachine: (embed-certs-078843) Reserved static IP address: 192.168.50.2
	I1031 00:12:35.480319  248718 main.go:141] libmachine: (embed-certs-078843) Waiting for SSH to be available...
	I1031 00:12:35.480334  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Getting to WaitForSSH function...
	I1031 00:12:35.482640  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.483140  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.483177  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.483343  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Using SSH client type: external
	I1031 00:12:35.483373  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa (-rw-------)
	I1031 00:12:35.483409  248718 main.go:141] libmachine: (embed-certs-078843) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:12:35.483434  248718 main.go:141] libmachine: (embed-certs-078843) DBG | About to run SSH command:
	I1031 00:12:35.483453  248718 main.go:141] libmachine: (embed-certs-078843) DBG | exit 0
	I1031 00:12:35.573283  248718 main.go:141] libmachine: (embed-certs-078843) DBG | SSH cmd err, output: <nil>: 
	I1031 00:12:35.573731  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetConfigRaw
	I1031 00:12:35.574538  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetIP
	I1031 00:12:35.577369  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.577820  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.577856  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.578175  248718 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/config.json ...
	I1031 00:12:35.578461  248718 machine.go:88] provisioning docker machine ...
	I1031 00:12:35.578486  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:35.578719  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetMachineName
	I1031 00:12:35.578919  248718 buildroot.go:166] provisioning hostname "embed-certs-078843"
	I1031 00:12:35.578946  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetMachineName
	I1031 00:12:35.579137  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:35.581632  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.582041  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.582075  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.582185  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:35.582376  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:35.582556  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:35.582694  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:35.582864  248718 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:35.583247  248718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I1031 00:12:35.583268  248718 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-078843 && echo "embed-certs-078843" | sudo tee /etc/hostname
	I1031 00:12:35.717684  248718 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-078843
	
	I1031 00:12:35.717719  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:35.720882  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.721264  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.721299  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.721514  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:35.721732  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:35.721908  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:35.722057  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:35.722318  248718 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:35.722757  248718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I1031 00:12:35.722777  248718 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-078843' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-078843/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-078843' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:12:35.865568  248718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
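
A minimal sketch of what the provisioner is doing in the two SSH commands above, i.e. running a shell snippet on the VM through the external ssh client (this is not minikube's own code; the key path is hypothetical and the host comes from the DHCP lease shown above):

    // sketch: run the hostname-setup snippet over SSH with the external client
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        key := "/path/to/machines/embed-certs-078843/id_rsa" // hypothetical key path
        host := "docker@192.168.50.2"                        // from the DHCP lease above
        script := `sudo hostname embed-certs-078843 && echo "embed-certs-078843" | sudo tee /etc/hostname`
        out, err := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-i", key, host, script).CombinedOutput()
        fmt.Printf("err=%v\n%s", err, out)
    }
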
	I1031 00:12:35.865626  248718 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:12:35.865667  248718 buildroot.go:174] setting up certificates
	I1031 00:12:35.865682  248718 provision.go:83] configureAuth start
	I1031 00:12:35.865696  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetMachineName
	I1031 00:12:35.866070  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetIP
	I1031 00:12:35.869149  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.869571  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.869610  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.869731  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:35.872260  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.872618  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.872665  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.872855  248718 provision.go:138] copyHostCerts
	I1031 00:12:35.872978  248718 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:12:35.873000  248718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:12:35.873069  248718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:12:35.873192  248718 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:12:35.873203  248718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:12:35.873234  248718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:12:35.873316  248718 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:12:35.873327  248718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:12:35.873352  248718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:12:35.873426  248718 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.embed-certs-078843 san=[192.168.50.2 192.168.50.2 localhost 127.0.0.1 minikube embed-certs-078843]
	I1031 00:12:36.016430  248718 provision.go:172] copyRemoteCerts
	I1031 00:12:36.016506  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:12:36.016553  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.019662  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.020054  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.020088  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.020286  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.020505  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.020658  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.020843  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:12:36.111793  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:12:36.140569  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1031 00:12:36.179708  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 00:12:36.203348  248718 provision.go:86] duration metric: configureAuth took 337.646698ms
	I1031 00:12:36.203385  248718 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:12:36.203690  248718 config.go:182] Loaded profile config "embed-certs-078843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:12:36.203835  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.207444  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.207883  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.207923  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.208236  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.208498  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.208690  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.208912  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.209163  248718 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:36.209521  248718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I1031 00:12:36.209547  248718 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:12:36.711502  248718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:12:36.711535  248718 machine.go:91] provisioned docker machine in 1.133056882s
	I1031 00:12:36.711550  248718 start.go:300] post-start starting for "embed-certs-078843" (driver="kvm2")
	I1031 00:12:36.711563  248718 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:12:36.711587  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.711984  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:12:36.712027  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.714954  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.715374  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.715408  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.715610  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.715815  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.716019  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.716192  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:12:36.803613  248718 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:12:36.808855  248718 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:12:36.808888  248718 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:12:36.808973  248718 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:12:36.809100  248718 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:12:36.809240  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:12:36.818339  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:12:36.845738  248718 start.go:303] post-start completed in 134.172265ms
	I1031 00:12:36.845765  248718 fix.go:56] fixHost completed within 20.4559017s
	I1031 00:12:36.845788  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.848249  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.848592  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.848621  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.848861  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.849120  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.849307  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.849462  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.849659  248718 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:36.850033  248718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I1031 00:12:36.850047  248718 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 00:12:36.966267  248718 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698711156.912809532
	
	I1031 00:12:36.966293  248718 fix.go:206] guest clock: 1698711156.912809532
	I1031 00:12:36.966303  248718 fix.go:219] Guest: 2023-10-31 00:12:36.912809532 +0000 UTC Remote: 2023-10-31 00:12:36.845768911 +0000 UTC m=+239.388163644 (delta=67.040621ms)
	I1031 00:12:36.966329  248718 fix.go:190] guest clock delta is within tolerance: 67.040621ms
	I1031 00:12:36.966341  248718 start.go:83] releasing machines lock for "embed-certs-078843", held for 20.576516085s
	I1031 00:12:36.966380  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.967388  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetIP
	I1031 00:12:36.970301  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.970734  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.970766  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.970934  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.971468  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.971683  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.971781  248718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:12:36.971832  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.971921  248718 ssh_runner.go:195] Run: cat /version.json
	I1031 00:12:36.971951  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.974873  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.975244  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.975323  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.975420  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.975692  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.975718  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.975759  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.975901  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.975959  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.976068  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.976221  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.976279  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.976358  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:12:36.977011  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:12:37.095751  248718 ssh_runner.go:195] Run: systemctl --version
	I1031 00:12:37.101600  248718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:12:37.244676  248718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:12:37.253623  248718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:12:37.253702  248718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:12:37.272872  248718 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 00:12:37.272897  248718 start.go:472] detecting cgroup driver to use...
	I1031 00:12:37.272992  248718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:12:37.290899  248718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:12:37.306570  248718 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:12:37.306633  248718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:12:37.321827  248718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:12:37.336787  248718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:12:37.451589  248718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:12:37.571290  248718 docker.go:214] disabling docker service ...
	I1031 00:12:37.571375  248718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:12:37.587764  248718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:12:37.600627  248718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:12:37.733539  248718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:12:37.850154  248718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 00:12:37.865463  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:12:37.883661  248718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1031 00:12:37.883728  248718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:37.892717  248718 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:12:37.892783  248718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:37.901944  248718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:37.911061  248718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:37.920094  248718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 00:12:37.929520  248718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:12:37.937333  248718 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 00:12:37.937404  248718 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 00:12:37.949591  248718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 00:12:37.960061  248718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:12:38.076354  248718 ssh_runner.go:195] Run: sudo systemctl restart crio
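
The sequence above rewrites the CRI-O drop-in config and the crictl config before restarting the service: the pause image is pinned, the cgroup manager is switched to cgroupfs, conmon is moved to the "pod" cgroup, and crictl is pointed at the CRI-O socket. Assuming the default file locations named in the commands, the edited files would end up containing roughly the following (reconstructed from the sed/printf commands above, not read back from the VM):

    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys only)
    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock
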
	I1031 00:12:38.250618  248718 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:12:38.250688  248718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:12:38.255979  248718 start.go:540] Will wait 60s for crictl version
	I1031 00:12:38.256036  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:12:38.259822  248718 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:12:38.299812  248718 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:12:38.299981  248718 ssh_runner.go:195] Run: crio --version
	I1031 00:12:38.343088  248718 ssh_runner.go:195] Run: crio --version
	I1031 00:12:38.397252  248718 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1031 00:12:36.993369  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Start
	I1031 00:12:36.993641  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Ensuring networks are active...
	I1031 00:12:36.994545  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Ensuring network default is active
	I1031 00:12:36.994911  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Ensuring network mk-default-k8s-diff-port-892233 is active
	I1031 00:12:36.995448  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Getting domain xml...
	I1031 00:12:36.996378  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Creating domain...
	I1031 00:12:38.342502  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting to get IP...
	I1031 00:12:38.343505  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.344038  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.344115  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:38.344004  249635 retry.go:31] will retry after 206.530958ms: waiting for machine to come up
	I1031 00:12:38.552789  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.553109  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.553140  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:38.553059  249635 retry.go:31] will retry after 272.962928ms: waiting for machine to come up
	I1031 00:12:38.827741  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.828288  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.828326  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:38.828242  249635 retry.go:31] will retry after 411.85264ms: waiting for machine to come up
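
The "waiting for machine to come up" lines are a retry loop: the driver repeatedly looks for a DHCP lease matching the VM's MAC address and sleeps a randomized interval between attempts. A minimal sketch of that pattern (the lookup function is a hypothetical stand-in for the libvirt lease query; the timeout is illustrative, not libmachine's actual code):

    // sketch: poll for a VM's IP with randomized backoff, as the retries above do
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical stand-in for querying libvirt DHCP leases by MAC.
    func lookupIP(mac string) (string, error) {
        return "", errors.New("no lease yet")
    }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            wait := time.Duration(200+rand.Intn(2800)) * time.Millisecond
            fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
            time.Sleep(wait)
        }
        return "", fmt.Errorf("machine %s did not get an IP within %s", mac, timeout)
    }

    func main() {
        fmt.Println(waitForIP("52:54:00:f4:e2:1e", 5*time.Second))
    }
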
	I1031 00:12:35.048294  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1031 00:12:35.048344  248387 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1031 00:12:35.048404  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1031 00:12:36.902739  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (1.854307965s)
	I1031 00:12:36.902771  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1031 00:12:36.902803  248387 cache_images.go:123] Successfully loaded all cached images
	I1031 00:12:36.902810  248387 cache_images.go:92] LoadImages completed in 19.027785915s
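
Each "Loading image" / "Transferred and loaded ... from cache" pair above corresponds to one `sudo podman load -i <tarball>` call against an image tarball previously copied into /var/lib/minikube/images. A minimal sketch of that loop (the real code runs these commands over SSH on the VM; the paths here are examples taken from the log):

    // sketch: load cached image tarballs into the runtime's storage with podman
    package main

    import (
        "fmt"
        "os/exec"
    )

    func loadCachedImages(tarballs []string) error {
        for _, t := range tarballs {
            out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput()
            if err != nil {
                return fmt.Errorf("podman load %s: %v\n%s", t, err, out)
            }
        }
        return nil
    }

    func main() {
        err := loadCachedImages([]string{
            "/var/lib/minikube/images/etcd_3.5.9-0",
            "/var/lib/minikube/images/kube-proxy_v1.28.3",
        })
        fmt.Println(err)
    }
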
	I1031 00:12:36.902926  248387 ssh_runner.go:195] Run: crio config
	I1031 00:12:36.961891  248387 cni.go:84] Creating CNI manager for ""
	I1031 00:12:36.961922  248387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:12:36.961950  248387 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:12:36.961992  248387 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.168 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-640155 NodeName:no-preload-640155 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 00:12:36.962203  248387 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-640155"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 00:12:36.962312  248387 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-640155 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-640155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 00:12:36.962389  248387 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 00:12:36.973945  248387 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:12:36.974026  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:12:36.987534  248387 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1031 00:12:37.006510  248387 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:12:37.025092  248387 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1031 00:12:37.045090  248387 ssh_runner.go:195] Run: grep 192.168.61.168	control-plane.minikube.internal$ /etc/hosts
	I1031 00:12:37.048822  248387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.168	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:12:37.061985  248387 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155 for IP: 192.168.61.168
	I1031 00:12:37.062026  248387 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:12:37.062243  248387 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:12:37.062310  248387 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:12:37.062410  248387 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/client.key
	I1031 00:12:37.062508  248387 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/apiserver.key.96e3443b
	I1031 00:12:37.062570  248387 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/proxy-client.key
	I1031 00:12:37.062707  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:12:37.062750  248387 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:12:37.062767  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:12:37.062832  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:12:37.062877  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:12:37.062923  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:12:37.062987  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:12:37.063745  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:12:37.090011  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:12:37.119401  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:12:37.148361  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 00:12:37.173730  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:12:37.197769  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:12:37.221625  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:12:37.244497  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:12:37.274559  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:12:37.300372  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:12:37.332082  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:12:37.361826  248387 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:12:37.380561  248387 ssh_runner.go:195] Run: openssl version
	I1031 00:12:37.386185  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:12:37.396710  248387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:12:37.401896  248387 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:12:37.401983  248387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:12:37.407778  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:12:37.418091  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:12:37.427985  248387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:37.432581  248387 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:37.432649  248387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:37.438103  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 00:12:37.447792  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:12:37.457689  248387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:12:37.462421  248387 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:12:37.462495  248387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:12:37.468482  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1031 00:12:37.478565  248387 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:12:37.483264  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:12:37.491175  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:12:37.498212  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:12:37.504019  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:12:37.509730  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:12:37.516218  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
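
The `-checkend 86400` runs above ask openssl whether each certificate will expire within the next 86400 seconds (24 hours); openssl exits non-zero if it will, presumably so the restart path only reuses certs that are still valid for at least a day. The same check in Go, assuming one of the certificate paths from the log:

    // sketch: the equivalent of `openssl x509 -checkend 86400` using crypto/x509
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        path := "/var/lib/minikube/certs/apiserver-etcd-client.crt" // path from the log above
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println("read:", err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse:", err)
            return
        }
        if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h") // -checkend would exit non-zero
        } else {
            fmt.Println("certificate valid for at least another 24h")
        }
    }
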
	I1031 00:12:37.523364  248387 kubeadm.go:404] StartCluster: {Name:no-preload-640155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-640155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.168 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:12:37.523465  248387 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:12:37.523522  248387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:12:37.576223  248387 cri.go:89] found id: ""
	I1031 00:12:37.576314  248387 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 00:12:37.586094  248387 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 00:12:37.586133  248387 kubeadm.go:636] restartCluster start
	I1031 00:12:37.586217  248387 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 00:12:37.595614  248387 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:37.596791  248387 kubeconfig.go:92] found "no-preload-640155" server: "https://192.168.61.168:8443"
	I1031 00:12:37.600710  248387 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 00:12:37.610066  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:37.610137  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:37.620501  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:37.620528  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:37.620578  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:37.630477  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:38.131205  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:38.131335  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:38.144627  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:38.631491  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:38.631587  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:38.647034  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:39.131616  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:39.131749  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:39.148723  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:39.631171  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:39.631273  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:39.645807  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:38.398862  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetIP
	I1031 00:12:38.401804  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:38.402158  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:38.402193  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:38.402475  248718 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1031 00:12:38.407041  248718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:12:38.421147  248718 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:12:38.421228  248718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:12:38.461162  248718 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1031 00:12:38.461240  248718 ssh_runner.go:195] Run: which lz4
	I1031 00:12:38.465401  248718 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1031 00:12:38.469796  248718 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 00:12:38.469833  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1031 00:12:40.419642  248718 crio.go:444] Took 1.954260 seconds to copy over tarball
	I1031 00:12:40.419721  248718 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
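The two lines above show the preload path: the prebuilt image tarball is copied into the VM and unpacked into /var with `tar -I lz4`. The following is a minimal local sketch of that extract step only, assuming the tarball and an lz4 binary are present on the machine where it runs; it is illustrative and not the minikube implementation, which streams the file over SSH first.

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	// Hypothetical paths; minikube copies the tarball over SSH before extracting.
	tarball := "/preloaded.tar.lz4"
	dest := "/var"

	start := time.Now()
	// Equivalent of the logged command: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", dest, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Printf("extracted %s in %s", tarball, time.Since(start))
}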
	I1031 00:12:39.241872  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:39.242407  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:39.242465  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:39.242347  249635 retry.go:31] will retry after 371.774477ms: waiting for machine to come up
	I1031 00:12:39.616171  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:39.616708  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:39.616747  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:39.616671  249635 retry.go:31] will retry after 487.120901ms: waiting for machine to come up
	I1031 00:12:40.105492  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:40.106116  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:40.106151  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:40.106066  249635 retry.go:31] will retry after 767.19349ms: waiting for machine to come up
	I1031 00:12:40.875432  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:40.875932  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:40.876009  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:40.875892  249635 retry.go:31] will retry after 976.411998ms: waiting for machine to come up
	I1031 00:12:41.854227  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:41.854759  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:41.854794  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:41.854691  249635 retry.go:31] will retry after 1.041793781s: waiting for machine to come up
	I1031 00:12:42.898223  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:42.898628  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:42.898658  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:42.898577  249635 retry.go:31] will retry after 1.163252223s: waiting for machine to come up
	I1031 00:12:44.064217  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:44.064593  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:44.064626  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:44.064543  249635 retry.go:31] will retry after 1.879015473s: waiting for machine to come up
	I1031 00:12:40.131216  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:40.131331  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:40.146846  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:40.630673  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:40.630747  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:40.642955  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:41.131275  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:41.131410  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:41.144530  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:41.631108  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:41.631219  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:41.645873  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:42.131506  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:42.131641  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:42.147504  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:42.630664  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:42.630769  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:42.645755  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:43.131375  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:43.131503  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:43.143357  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:43.631616  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:43.631714  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:43.647203  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.130693  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.130791  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.143566  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.630736  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.630816  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.642486  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:43.535831  248718 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.116078442s)
	I1031 00:12:43.535864  248718 crio.go:451] Took 3.116189 seconds to extract the tarball
	I1031 00:12:43.535877  248718 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 00:12:43.579902  248718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:12:43.635701  248718 crio.go:496] all images are preloaded for cri-o runtime.
	I1031 00:12:43.635724  248718 cache_images.go:84] Images are preloaded, skipping loading
	I1031 00:12:43.635796  248718 ssh_runner.go:195] Run: crio config
	I1031 00:12:43.714916  248718 cni.go:84] Creating CNI manager for ""
	I1031 00:12:43.714939  248718 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:12:43.714958  248718 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:12:43.714976  248718 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-078843 NodeName:embed-certs-078843 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 00:12:43.715146  248718 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-078843"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 00:12:43.715232  248718 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-078843 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-078843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
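The ExecStart line above is the kubelet systemd drop-in that minikube renders from the node's settings and then copies to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp a few lines below). A minimal sketch of rendering such a drop-in with text/template follows; the struct fields and template text are assumptions for illustration, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// Hypothetical parameters; minikube derives these from the cluster config shown above.
type kubeletOpts struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
	CRISocket         string
}

// An illustrative template for a kubelet systemd drop-in, not minikube's actual one.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	opts := kubeletOpts{
		KubernetesVersion: "v1.28.3",
		NodeName:          "embed-certs-078843",
		NodeIP:            "192.168.50.2",
		CRISocket:         "unix:///var/run/crio/crio.sock",
	}
	// In the log, the rendered text is then written to the kubelet.service.d drop-in on the VM.
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}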
	I1031 00:12:43.715295  248718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 00:12:43.726847  248718 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:12:43.726938  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:12:43.738352  248718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1031 00:12:43.756439  248718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:12:43.773955  248718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1031 00:12:43.793790  248718 ssh_runner.go:195] Run: grep 192.168.50.2	control-plane.minikube.internal$ /etc/hosts
	I1031 00:12:43.798155  248718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:12:43.811602  248718 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843 for IP: 192.168.50.2
	I1031 00:12:43.811649  248718 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:12:43.811819  248718 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:12:43.811877  248718 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:12:43.811963  248718 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/client.key
	I1031 00:12:43.812051  248718 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/apiserver.key.e10f976c
	I1031 00:12:43.812117  248718 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/proxy-client.key
	I1031 00:12:43.812261  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:12:43.812301  248718 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:12:43.812317  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:12:43.812359  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:12:43.812395  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:12:43.812430  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:12:43.812491  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:12:43.813192  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:12:43.841097  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:12:43.867995  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:12:43.892834  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1031 00:12:43.917649  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:12:43.942299  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:12:43.971154  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:12:43.995032  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:12:44.022277  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:12:44.047549  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:12:44.071370  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:12:44.095933  248718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:12:44.113479  248718 ssh_runner.go:195] Run: openssl version
	I1031 00:12:44.119266  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:12:44.133710  248718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:12:44.140098  248718 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:12:44.140180  248718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:12:44.146416  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:12:44.159207  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:12:44.171618  248718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:44.178288  248718 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:44.178375  248718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:44.186339  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 00:12:44.200864  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:12:44.212513  248718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:12:44.217549  248718 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:12:44.217616  248718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:12:44.225170  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1031 00:12:44.239600  248718 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:12:44.244470  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:12:44.252637  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:12:44.260635  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:12:44.269017  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:12:44.277210  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:12:44.285394  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
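The run of `openssl x509 -noout -in ... -checkend 86400` calls above verifies that each control-plane certificate remains valid for at least the next 24 hours before the cluster is restarted. Below is a minimal Go equivalent using crypto/x509, assuming a PEM-encoded certificate path; it mirrors the check, it is not minikube's code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// roughly what `openssl x509 -noout -checkend <seconds>` checks.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical path; the log checks the apiserver, etcd and front-proxy client certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}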
	I1031 00:12:44.293419  248718 kubeadm.go:404] StartCluster: {Name:embed-certs-078843 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.3 ClusterName:embed-certs-078843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:12:44.293507  248718 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:12:44.293620  248718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:12:44.339212  248718 cri.go:89] found id: ""
	I1031 00:12:44.339302  248718 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 00:12:44.350219  248718 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 00:12:44.350249  248718 kubeadm.go:636] restartCluster start
	I1031 00:12:44.350315  248718 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 00:12:44.360185  248718 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.361826  248718 kubeconfig.go:92] found "embed-certs-078843" server: "https://192.168.50.2:8443"
	I1031 00:12:44.365579  248718 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 00:12:44.376923  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.377021  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.390684  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.390708  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.390768  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.404614  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.905332  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.905451  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.918162  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:45.405760  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:45.405845  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:45.419071  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:45.905669  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:45.905770  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:45.922243  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:46.404757  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:46.404870  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:46.419662  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:46.905223  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:46.905328  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:46.919993  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:47.405571  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:47.405660  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:47.418433  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:45.944837  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:45.945386  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:45.945422  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:45.945318  249635 retry.go:31] will retry after 1.840120385s: waiting for machine to come up
	I1031 00:12:47.787276  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:47.787807  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:47.787844  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:47.787751  249635 retry.go:31] will retry after 2.306470153s: waiting for machine to come up
	I1031 00:12:45.131185  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:45.225229  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:45.237425  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:45.630872  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:45.630948  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:45.644580  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:46.131199  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:46.131280  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:46.142872  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:46.631467  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:46.631545  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:46.648339  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:47.130861  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:47.131000  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:47.146189  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:47.610939  248387 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:12:47.610999  248387 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:12:47.611016  248387 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:12:47.611107  248387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:12:47.656888  248387 cri.go:89] found id: ""
	I1031 00:12:47.656982  248387 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:12:47.678724  248387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:12:47.688879  248387 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:12:47.688985  248387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:12:47.697091  248387 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:12:47.697115  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:47.837056  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:48.448497  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:48.639877  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:48.735406  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:48.824428  248387 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:12:48.824521  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:48.840207  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:49.357050  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:49.857029  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:47.905449  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:47.905552  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:47.921939  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:48.405557  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:48.405656  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:48.417674  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:48.905114  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:48.905225  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:48.919218  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:49.404811  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:49.404908  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:49.420062  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:49.905655  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:49.905769  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:49.922828  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:50.405471  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:50.405578  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:50.423259  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:50.904727  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:50.904819  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:50.920673  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:51.405155  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:51.405246  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:51.421731  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:51.905024  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:51.905101  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:51.919385  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:52.404843  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:52.404985  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:52.420088  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:50.095827  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:50.096326  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:50.096365  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:50.096281  249635 retry.go:31] will retry after 3.872051375s: waiting for machine to come up
	I1031 00:12:53.970393  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:53.970918  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:53.970956  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:53.970839  249635 retry.go:31] will retry after 5.345847198s: waiting for machine to come up
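The retry.go lines above show libmachine polling for the VM's DHCP lease with steadily growing wait intervals ("will retry after ...: waiting for machine to come up"). The sketch below illustrates that backoff pattern with a hypothetical probe function; it is a simplified stand-in, not the retry package minikube uses.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries probe with a jittered, growing delay, mirroring the
// "will retry after ...: waiting for machine to come up" lines above.
func waitFor(probe func() error, attempts int) error {
	delay := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if err := probe(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return errors.New("machine did not come up")
}

func main() {
	tries := 0
	// Hypothetical probe; in minikube this asks libvirt for the domain's current IP.
	probe := func() error {
		tries++
		if tries < 4 {
			return errors.New("no IP yet")
		}
		return nil
	}
	if err := waitFor(probe, 10); err != nil {
		panic(err)
	}
	fmt.Println("machine is up")
}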
	I1031 00:12:50.357101  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:50.857024  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:51.357298  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:51.380143  248387 api_server.go:72] duration metric: took 2.555721824s to wait for apiserver process to appear ...
	I1031 00:12:51.380180  248387 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:12:51.380220  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:54.457683  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:12:54.457719  248387 api_server.go:103] status: https://192.168.61.168:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:12:54.457733  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:54.509385  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:12:54.509424  248387 api_server.go:103] status: https://192.168.61.168:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:12:55.010185  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:55.017172  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:12:55.017201  248387 api_server.go:103] status: https://192.168.61.168:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:12:55.510171  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:55.517062  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:12:55.517114  248387 api_server.go:103] status: https://192.168.61.168:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:12:56.009671  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:56.017135  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 200:
	ok
	I1031 00:12:56.026278  248387 api_server.go:141] control plane version: v1.28.3
	I1031 00:12:56.026307  248387 api_server.go:131] duration metric: took 4.646117858s to wait for apiserver health ...
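The block above polls https://192.168.61.168:8443/healthz until it returns 200, tolerating the early 403 (anonymous user not yet authorized) and 500 (poststarthooks such as rbac/bootstrap-roles still running) responses. A minimal polling sketch with an insecure TLS client follows, assuming that endpoint URL; it is illustrative only and not the api_server.go logic quoted in the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns HTTP 200, the same idea as the
// "Checking apiserver healthz" loop above. TLS verification is skipped because
// the apiserver certificate is signed by the cluster's own CA.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s never returned 200", url)
}

func main() {
	// Hypothetical endpoint; the log polls https://192.168.61.168:8443/healthz.
	if err := waitHealthz("https://192.168.61.168:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver is healthy")
}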
	I1031 00:12:56.026319  248387 cni.go:84] Creating CNI manager for ""
	I1031 00:12:56.026331  248387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:12:56.028208  248387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:12:52.904735  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:52.904835  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:52.917320  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:53.405426  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:53.405546  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:53.420386  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:53.904921  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:53.905039  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:53.917303  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:54.377921  248718 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:12:54.377976  248718 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:12:54.377991  248718 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:12:54.378079  248718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:12:54.418685  248718 cri.go:89] found id: ""
	I1031 00:12:54.418768  248718 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:12:54.436536  248718 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:12:54.451466  248718 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:12:54.451534  248718 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:12:54.464460  248718 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:12:54.464484  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:54.601286  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:55.468262  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:55.664604  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:55.761171  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:55.838690  248718 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:12:55.838793  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:55.857817  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:56.379368  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:56.878782  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:57.379756  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:56.029552  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:12:56.078774  248387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:12:56.128262  248387 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:12:56.139995  248387 system_pods.go:59] 8 kube-system pods found
	I1031 00:12:56.140025  248387 system_pods.go:61] "coredns-5dd5756b68-qbvjb" [92f771d8-381b-4e38-945f-ad5ceae72b80] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1031 00:12:56.140035  248387 system_pods.go:61] "etcd-no-preload-640155" [44fcbc32-757b-4406-97ed-88ad76ae4eee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1031 00:12:56.140042  248387 system_pods.go:61] "kube-apiserver-no-preload-640155" [b92b3dec-827f-4221-8c28-83a738186e52] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1031 00:12:56.140048  248387 system_pods.go:61] "kube-controller-manager-no-preload-640155" [62661788-bde2-42b9-9469-a2f2c51ee283] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1031 00:12:56.140057  248387 system_pods.go:61] "kube-proxy-rv76j" [293b1dd9-fc85-4647-91c9-874ad357d222] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1031 00:12:56.140063  248387 system_pods.go:61] "kube-scheduler-no-preload-640155" [6a11d962-b407-467e-b8a0-9a101b16e4d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1031 00:12:56.140076  248387 system_pods.go:61] "metrics-server-57f55c9bc5-nm8dj" [3924727e-2734-497d-b1b1-d8f9a0ab095a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:12:56.140090  248387 system_pods.go:61] "storage-provisioner" [f8e0a3fa-eaf1-45e1-afbc-a5b2287e7703] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1031 00:12:56.140100  248387 system_pods.go:74] duration metric: took 11.816257ms to wait for pod list to return data ...
	I1031 00:12:56.140110  248387 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:12:56.143298  248387 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:12:56.143327  248387 node_conditions.go:123] node cpu capacity is 2
	I1031 00:12:56.143365  248387 node_conditions.go:105] duration metric: took 3.247248ms to run NodePressure ...
	I1031 00:12:56.143402  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:56.398227  248387 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1031 00:12:56.403101  248387 kubeadm.go:787] kubelet initialised
	I1031 00:12:56.403124  248387 kubeadm.go:788] duration metric: took 4.866042ms waiting for restarted kubelet to initialise ...
	I1031 00:12:56.403134  248387 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:12:56.408758  248387 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-qbvjb" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:56.416185  248387 pod_ready.go:97] node "no-preload-640155" hosting pod "coredns-5dd5756b68-qbvjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.416218  248387 pod_ready.go:81] duration metric: took 7.431969ms waiting for pod "coredns-5dd5756b68-qbvjb" in "kube-system" namespace to be "Ready" ...
	E1031 00:12:56.416229  248387 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-640155" hosting pod "coredns-5dd5756b68-qbvjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.416238  248387 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:56.421589  248387 pod_ready.go:97] node "no-preload-640155" hosting pod "etcd-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.421611  248387 pod_ready.go:81] duration metric: took 5.364261ms waiting for pod "etcd-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	E1031 00:12:56.421619  248387 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-640155" hosting pod "etcd-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.421624  248387 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:56.427046  248387 pod_ready.go:97] node "no-preload-640155" hosting pod "kube-apiserver-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.427075  248387 pod_ready.go:81] duration metric: took 5.443698ms waiting for pod "kube-apiserver-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	E1031 00:12:56.427086  248387 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-640155" hosting pod "kube-apiserver-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.427098  248387 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:56.534169  248387 pod_ready.go:97] node "no-preload-640155" hosting pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.534224  248387 pod_ready.go:81] duration metric: took 107.102474ms waiting for pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	E1031 00:12:56.534241  248387 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-640155" hosting pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.534255  248387 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rv76j" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:57.332793  248387 pod_ready.go:92] pod "kube-proxy-rv76j" in "kube-system" namespace has status "Ready":"True"
	I1031 00:12:57.332824  248387 pod_ready.go:81] duration metric: took 798.55794ms waiting for pod "kube-proxy-rv76j" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:57.332838  248387 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:59.642186  248387 pod_ready.go:102] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"False"
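	The pod_ready.go lines above wait up to 4m0s for each system-critical pod to report a Ready condition, skipping pods whose node is not yet Ready. A minimal client-go sketch of that readiness check follows; the namespace, pod name, and 4-minute window come from the log, while the kubeconfig path and 2-second poll interval are illustrative assumptions rather than minikube's actual implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Illustrative kubeconfig path; minikube keeps one per profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		deadline := time.Now().Add(4 * time.Minute) // "waiting up to 4m0s" in the log
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
				"kube-scheduler-no-preload-640155", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second) // poll interval chosen for the sketch
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}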
	I1031 00:13:00.818958  248084 start.go:369] acquired machines lock for "old-k8s-version-225140" in 1m2.435313483s
	I1031 00:13:00.819017  248084 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:13:00.819032  248084 fix.go:54] fixHost starting: 
	I1031 00:13:00.819456  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:00.819490  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:00.838737  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I1031 00:13:00.839191  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:00.839773  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:13:00.839794  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:00.840290  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:00.840514  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:00.840697  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:13:00.843346  248084 fix.go:102] recreateIfNeeded on old-k8s-version-225140: state=Stopped err=<nil>
	I1031 00:13:00.843381  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	W1031 00:13:00.843658  248084 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:13:00.848997  248084 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-225140" ...
	I1031 00:12:59.318443  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.319011  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Found IP for machine: 192.168.39.2
	I1031 00:12:59.319037  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Reserving static IP address...
	I1031 00:12:59.319070  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has current primary IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.319522  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-892233", mac: "52:54:00:f4:e2:1e", ip: "192.168.39.2"} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.319557  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Reserved static IP address: 192.168.39.2
	I1031 00:12:59.319595  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | skip adding static IP to network mk-default-k8s-diff-port-892233 - found existing host DHCP lease matching {name: "default-k8s-diff-port-892233", mac: "52:54:00:f4:e2:1e", ip: "192.168.39.2"}
	I1031 00:12:59.319620  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | Getting to WaitForSSH function...
	I1031 00:12:59.319653  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for SSH to be available...
	I1031 00:12:59.322357  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.322780  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.322819  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.322938  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | Using SSH client type: external
	I1031 00:12:59.322969  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa (-rw-------)
	I1031 00:12:59.323009  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:12:59.323029  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | About to run SSH command:
	I1031 00:12:59.323064  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | exit 0
	I1031 00:12:59.421581  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | SSH cmd err, output: <nil>: 
	I1031 00:12:59.421963  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetConfigRaw
	I1031 00:12:59.422651  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetIP
	I1031 00:12:59.425540  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.425916  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.425961  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.426201  249055 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/config.json ...
	I1031 00:12:59.426454  249055 machine.go:88] provisioning docker machine ...
	I1031 00:12:59.426481  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:12:59.426720  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetMachineName
	I1031 00:12:59.426879  249055 buildroot.go:166] provisioning hostname "default-k8s-diff-port-892233"
	I1031 00:12:59.426898  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetMachineName
	I1031 00:12:59.427067  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:12:59.429588  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.429937  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.429975  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.430208  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:12:59.430403  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:12:59.430573  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:12:59.430690  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:12:59.430852  249055 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:59.431368  249055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1031 00:12:59.431386  249055 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-892233 && echo "default-k8s-diff-port-892233" | sudo tee /etc/hostname
	I1031 00:12:59.572253  249055 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-892233
	
	I1031 00:12:59.572299  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:12:59.575534  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.575858  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.575919  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.576140  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:12:59.576366  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:12:59.576592  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:12:59.576766  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:12:59.576919  249055 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:59.577349  249055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1031 00:12:59.577372  249055 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-892233' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-892233/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-892233' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:12:59.714987  249055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:12:59.715020  249055 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:12:59.715079  249055 buildroot.go:174] setting up certificates
	I1031 00:12:59.715094  249055 provision.go:83] configureAuth start
	I1031 00:12:59.715115  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetMachineName
	I1031 00:12:59.715440  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetIP
	I1031 00:12:59.718485  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.718900  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.718932  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.719039  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:12:59.721488  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.721844  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.721874  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.722068  249055 provision.go:138] copyHostCerts
	I1031 00:12:59.722141  249055 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:12:59.722155  249055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:12:59.722227  249055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:12:59.722363  249055 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:12:59.722377  249055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:12:59.722402  249055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:12:59.722528  249055 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:12:59.722538  249055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:12:59.722560  249055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:12:59.722619  249055 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-892233 san=[192.168.39.2 192.168.39.2 localhost 127.0.0.1 minikube default-k8s-diff-port-892233]
	I1031 00:13:00.038821  249055 provision.go:172] copyRemoteCerts
	I1031 00:13:00.038892  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:13:00.038924  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.042237  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.042585  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.042627  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.042753  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.042976  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.043252  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.043410  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:13:00.130665  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:13:00.158853  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1031 00:13:00.188023  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 00:13:00.214990  249055 provision.go:86] duration metric: configureAuth took 499.878655ms
	I1031 00:13:00.215020  249055 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:13:00.215284  249055 config.go:182] Loaded profile config "default-k8s-diff-port-892233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:13:00.215445  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.218339  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.218821  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.218861  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.219039  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.219282  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.219500  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.219672  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.219873  249055 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:00.220371  249055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1031 00:13:00.220411  249055 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:13:00.567578  249055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:13:00.567663  249055 machine.go:91] provisioned docker machine in 1.141189726s
	I1031 00:13:00.567680  249055 start.go:300] post-start starting for "default-k8s-diff-port-892233" (driver="kvm2")
	I1031 00:13:00.567695  249055 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:13:00.567719  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.568094  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:13:00.568134  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.570983  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.571434  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.571478  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.571649  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.571849  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.572010  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.572173  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:13:00.660300  249055 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:13:00.665751  249055 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:13:00.665779  249055 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:13:00.665853  249055 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:13:00.665958  249055 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:13:00.666046  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:13:00.677668  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:13:00.702125  249055 start.go:303] post-start completed in 134.425173ms
	I1031 00:13:00.702165  249055 fix.go:56] fixHost completed within 23.735576451s
	I1031 00:13:00.702195  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.705554  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.705976  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.706029  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.706319  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.706545  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.706722  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.706872  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.707040  249055 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:00.707449  249055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1031 00:13:00.707470  249055 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 00:13:00.818749  249055 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698711180.762641951
	
	I1031 00:13:00.818785  249055 fix.go:206] guest clock: 1698711180.762641951
	I1031 00:13:00.818797  249055 fix.go:219] Guest: 2023-10-31 00:13:00.762641951 +0000 UTC Remote: 2023-10-31 00:13:00.70217124 +0000 UTC m=+181.580385758 (delta=60.470711ms)
	I1031 00:13:00.818850  249055 fix.go:190] guest clock delta is within tolerance: 60.470711ms
	I1031 00:13:00.818861  249055 start.go:83] releasing machines lock for "default-k8s-diff-port-892233", held for 23.852333569s
	I1031 00:13:00.818897  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.819199  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetIP
	I1031 00:13:00.822674  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.823152  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.823194  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.823436  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.824107  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.824336  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.824543  249055 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:13:00.824603  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.824669  249055 ssh_runner.go:195] Run: cat /version.json
	I1031 00:13:00.824698  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.827622  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.828092  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.828149  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.828176  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.828377  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.828420  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.828477  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.828558  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.828638  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.828741  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.828817  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.828926  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.829014  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:13:00.829694  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:13:00.945937  249055 ssh_runner.go:195] Run: systemctl --version
	I1031 00:13:00.951731  249055 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:13:01.099346  249055 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:13:01.106701  249055 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:13:01.106789  249055 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:13:01.122651  249055 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 00:13:01.122738  249055 start.go:472] detecting cgroup driver to use...
	I1031 00:13:01.122839  249055 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:13:01.140968  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:13:01.159184  249055 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:13:01.159267  249055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:13:01.176636  249055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:13:01.190420  249055 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:13:01.304327  249055 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:13:01.446312  249055 docker.go:214] disabling docker service ...
	I1031 00:13:01.446440  249055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:13:01.462043  249055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:13:01.478402  249055 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:13:01.618099  249055 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:13:01.745376  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 00:13:01.758262  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:13:01.774927  249055 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1031 00:13:01.774999  249055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:01.784376  249055 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:13:01.784441  249055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:01.793769  249055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:01.802954  249055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:01.813429  249055 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 00:13:01.822730  249055 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:13:01.832032  249055 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 00:13:01.832103  249055 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 00:13:01.845005  249055 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 00:13:01.855358  249055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:13:01.997815  249055 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1031 00:13:02.229016  249055 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:13:02.229090  249055 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:13:02.233980  249055 start.go:540] Will wait 60s for crictl version
	I1031 00:13:02.234044  249055 ssh_runner.go:195] Run: which crictl
	I1031 00:13:02.237901  249055 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:13:02.280450  249055 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:13:02.280562  249055 ssh_runner.go:195] Run: crio --version
	I1031 00:13:02.326608  249055 ssh_runner.go:195] Run: crio --version
	I1031 00:13:02.381010  249055 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
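	The sed commands above show how minikube rewrites /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses registry.k8s.io/pause:3.9 as its pause image and cgroupfs as its cgroup manager before restarting the service. Below is a standalone Go sketch of the same rewrite; the file path and both values are taken from the log, and editing the file locally instead of over SSH is a simplification for illustration.

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const confPath = "/etc/crio/crio.conf.d/02-crio.conf" // drop-in file seen in the log

		data, err := os.ReadFile(confPath)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}

		// Mirror the two sed substitutions from the log: pin the pause image
		// and force the cgroupfs cgroup manager.
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

		if err := os.WriteFile(confPath, out, 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// The log then reloads systemd and restarts the runtime:
		// sudo systemctl daemon-reload && sudo systemctl restart crio
	}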
	I1031 00:12:57.879480  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:58.378990  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:58.401245  248718 api_server.go:72] duration metric: took 2.5625596s to wait for apiserver process to appear ...
	I1031 00:12:58.401294  248718 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:12:58.401317  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:01.483261  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:01.483293  248718 api_server.go:103] status: https://192.168.50.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:01.483309  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:01.586135  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:01.586172  248718 api_server.go:103] status: https://192.168.50.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:02.086932  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:02.095676  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:13:02.095714  248718 api_server.go:103] status: https://192.168.50.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:13:02.586339  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:02.599335  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:13:02.599376  248718 api_server.go:103] status: https://192.168.50.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:13:03.087312  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:03.095444  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 200:
	ok
	I1031 00:13:03.107809  248718 api_server.go:141] control plane version: v1.28.3
	I1031 00:13:03.107842  248718 api_server.go:131] duration metric: took 4.706538937s to wait for apiserver health ...
	I1031 00:13:03.107855  248718 cni.go:84] Creating CNI manager for ""
	I1031 00:13:03.107864  248718 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:03.110057  248718 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
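	The api_server.go lines above probe https://192.168.50.2:8443/healthz until it answers 200 "ok", treating the interim 403 (RBAC roles not yet bootstrapped) and 500 (poststarthook checks still failing) responses as "not ready yet". A minimal Go sketch of such a probe loop follows; the URL comes from the log, while the anonymous request, disabled TLS verification, poll interval, and 2-minute deadline are simplifying assumptions for the sketch.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		const healthz = "https://192.168.50.2:8443/healthz" // endpoint seen in the log

		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver presents a cluster-internal certificate; verification is
			// skipped here only to keep the sketch self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(healthz)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("apiserver healthy: %s\n", body) // typically just "ok"
					return
				}
				// 403 and 500 just mean "keep waiting", as the log shows.
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver /healthz")
	}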
	I1031 00:13:02.382546  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetIP
	I1031 00:13:02.386646  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:02.387022  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:02.387068  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:02.387291  249055 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 00:13:02.393394  249055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:13:02.408630  249055 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:13:02.408723  249055 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:13:02.461303  249055 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1031 00:13:02.461388  249055 ssh_runner.go:195] Run: which lz4
	I1031 00:13:02.466160  249055 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1031 00:13:02.472133  249055 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 00:13:02.472175  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1031 00:13:01.647436  248387 pod_ready.go:102] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:03.653247  248387 pod_ready.go:102] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:03.111616  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:13:03.142561  248718 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:13:03.210454  248718 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:13:03.229202  248718 system_pods.go:59] 8 kube-system pods found
	I1031 00:13:03.229253  248718 system_pods.go:61] "coredns-5dd5756b68-dqrs4" [f6d80a09-c397-4c78-a038-f07cad11de9c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1031 00:13:03.229269  248718 system_pods.go:61] "etcd-embed-certs-078843" [2dd3d20f-1309-4ec9-ab75-6b00cadc5827] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1031 00:13:03.229278  248718 system_pods.go:61] "kube-apiserver-embed-certs-078843" [6a41123e-11a9-4aff-8f78-802b8f59a1bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1031 00:13:03.229289  248718 system_pods.go:61] "kube-controller-manager-embed-certs-078843" [9ccb551e-3e3f-4cdc-991e-65b41febf105] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1031 00:13:03.229302  248718 system_pods.go:61] "kube-proxy-287dq" [c9c3a3a9-ff79-4cd8-ab26-a4ca2bec1fd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1031 00:13:03.229321  248718 system_pods.go:61] "kube-scheduler-embed-certs-078843" [13a0f095-b945-437c-a7ef-929739bfcb01] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1031 00:13:03.229339  248718 system_pods.go:61] "metrics-server-57f55c9bc5-pm6qx" [5ed61015-eb88-4381-adc3-8d1f4021c6aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:13:03.229353  248718 system_pods.go:61] "storage-provisioner" [6bce0572-aad8-4a9f-978f-9bd0ff62904a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1031 00:13:03.229369  248718 system_pods.go:74] duration metric: took 18.888134ms to wait for pod list to return data ...
	I1031 00:13:03.229379  248718 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:13:03.269761  248718 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:13:03.269808  248718 node_conditions.go:123] node cpu capacity is 2
	I1031 00:13:03.269821  248718 node_conditions.go:105] duration metric: took 40.435389ms to run NodePressure ...
	I1031 00:13:03.269843  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:03.828792  248718 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1031 00:13:03.840423  248718 kubeadm.go:787] kubelet initialised
	I1031 00:13:03.840449  248718 kubeadm.go:788] duration metric: took 11.631934ms waiting for restarted kubelet to initialise ...
	I1031 00:13:03.840461  248718 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:13:03.856214  248718 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:03.885090  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.885128  248718 pod_ready.go:81] duration metric: took 28.821802ms waiting for pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:03.885141  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.885169  248718 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:03.903365  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "etcd-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.903468  248718 pod_ready.go:81] duration metric: took 18.286782ms waiting for pod "etcd-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:03.903494  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "etcd-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.903516  248718 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:03.918470  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.918511  248718 pod_ready.go:81] duration metric: took 14.954407ms waiting for pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:03.918536  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.918548  248718 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:03.933999  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.934040  248718 pod_ready.go:81] duration metric: took 15.480835ms waiting for pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:03.934057  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.934068  248718 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-287dq" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:04.237338  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "kube-proxy-287dq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.237374  248718 pod_ready.go:81] duration metric: took 303.296061ms waiting for pod "kube-proxy-287dq" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:04.237389  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "kube-proxy-287dq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.237398  248718 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:04.634179  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.634222  248718 pod_ready.go:81] duration metric: took 396.814691ms waiting for pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:04.634238  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.634253  248718 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:05.035746  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:05.035785  248718 pod_ready.go:81] duration metric: took 401.520697ms waiting for pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:05.035801  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:05.035816  248718 pod_ready.go:38] duration metric: took 1.195339888s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:13:05.035852  248718 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:13:05.053467  248718 ops.go:34] apiserver oom_adj: -16
	I1031 00:13:05.053499  248718 kubeadm.go:640] restartCluster took 20.703241237s
	I1031 00:13:05.053510  248718 kubeadm.go:406] StartCluster complete in 20.760104259s
	I1031 00:13:05.053534  248718 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:13:05.053649  248718 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:13:05.056586  248718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:13:05.056927  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:13:05.057035  248718 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:13:05.057123  248718 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-078843"
	I1031 00:13:05.057141  248718 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-078843"
	W1031 00:13:05.057149  248718 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:13:05.057204  248718 config.go:182] Loaded profile config "embed-certs-078843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:13:05.057234  248718 addons.go:69] Setting default-storageclass=true in profile "embed-certs-078843"
	I1031 00:13:05.057211  248718 host.go:66] Checking if "embed-certs-078843" exists ...
	I1031 00:13:05.057248  248718 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-078843"
	I1031 00:13:05.057647  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.057682  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.057706  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.057743  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.057816  248718 addons.go:69] Setting metrics-server=true in profile "embed-certs-078843"
	I1031 00:13:05.057835  248718 addons.go:231] Setting addon metrics-server=true in "embed-certs-078843"
	W1031 00:13:05.057846  248718 addons.go:240] addon metrics-server should already be in state true
	I1031 00:13:05.057940  248718 host.go:66] Checking if "embed-certs-078843" exists ...
	I1031 00:13:05.058407  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.058492  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.077590  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40411
	I1031 00:13:05.077948  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44471
	I1031 00:13:05.078081  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.078347  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.078769  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.078785  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.079028  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.079054  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.079408  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.085132  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.085145  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34653
	I1031 00:13:05.085597  248718 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-078843" context rescaled to 1 replicas
	I1031 00:13:05.085640  248718 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:13:05.088029  248718 out.go:177] * Verifying Kubernetes components...
	I1031 00:13:05.085726  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.085922  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.086067  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:13:05.089646  248718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:13:05.089718  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.090571  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.090592  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.091096  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.091945  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.092003  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.095067  248718 addons.go:231] Setting addon default-storageclass=true in "embed-certs-078843"
	W1031 00:13:05.095093  248718 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:13:05.095131  248718 host.go:66] Checking if "embed-certs-078843" exists ...
	I1031 00:13:05.095551  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.095608  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.111102  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I1031 00:13:05.111739  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.112393  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.112413  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.112797  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.112983  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:13:05.114423  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37229
	I1031 00:13:05.114993  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.115615  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.115634  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.115848  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:13:05.116042  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.118503  248718 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:13:05.116288  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:13:05.120126  248718 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:13:05.120149  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:13:05.120184  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:13:05.120637  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I1031 00:13:05.121136  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.121582  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.121601  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.122054  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.122163  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:13:05.122536  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.122576  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.124417  248718 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:13:00.852003  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Start
	I1031 00:13:00.853038  248084 main.go:141] libmachine: (old-k8s-version-225140) Ensuring networks are active...
	I1031 00:13:00.853268  248084 main.go:141] libmachine: (old-k8s-version-225140) Ensuring network default is active
	I1031 00:13:00.853774  248084 main.go:141] libmachine: (old-k8s-version-225140) Ensuring network mk-old-k8s-version-225140 is active
	I1031 00:13:00.854290  248084 main.go:141] libmachine: (old-k8s-version-225140) Getting domain xml...
	I1031 00:13:00.855089  248084 main.go:141] libmachine: (old-k8s-version-225140) Creating domain...
	I1031 00:13:02.250983  248084 main.go:141] libmachine: (old-k8s-version-225140) Waiting to get IP...
	I1031 00:13:02.251883  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:02.252351  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:02.252421  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:02.252327  249826 retry.go:31] will retry after 242.989359ms: waiting for machine to come up
	I1031 00:13:02.497099  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:02.497647  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:02.497671  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:02.497581  249826 retry.go:31] will retry after 267.660992ms: waiting for machine to come up
	I1031 00:13:02.767445  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:02.770812  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:02.770846  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:02.770757  249826 retry.go:31] will retry after 311.592507ms: waiting for machine to come up
	I1031 00:13:03.085650  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:03.086233  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:03.086262  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:03.086139  249826 retry.go:31] will retry after 594.222148ms: waiting for machine to come up
	I1031 00:13:03.681721  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:03.682255  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:03.682286  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:03.682147  249826 retry.go:31] will retry after 758.043103ms: waiting for machine to come up
	I1031 00:13:04.442274  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:04.443048  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:04.443078  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:04.442997  249826 retry.go:31] will retry after 887.518169ms: waiting for machine to come up
	I1031 00:13:05.332541  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:05.333184  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:05.333212  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:05.333129  249826 retry.go:31] will retry after 851.434462ms: waiting for machine to come up
	I1031 00:13:05.125889  248718 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:13:05.125912  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:13:05.125931  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:13:05.124466  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.126004  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:13:05.126025  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.125276  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:13:05.126198  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:13:05.126338  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:13:05.126414  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:13:05.131827  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:13:05.131844  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.131883  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:13:05.131916  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.132049  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:13:05.132274  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:13:05.132420  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:13:05.144729  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I1031 00:13:05.145178  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.145775  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.145795  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.146202  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.146381  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:13:05.149644  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:13:05.150317  248718 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:13:05.150332  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:13:05.150350  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:13:05.153417  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.153915  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:13:05.153956  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.154082  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:13:05.154266  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:13:05.154606  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:13:05.154731  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:13:05.279166  248718 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:13:05.279209  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:13:05.314989  248718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:13:05.318765  248718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:13:05.337844  248718 node_ready.go:35] waiting up to 6m0s for node "embed-certs-078843" to be "Ready" ...
	I1031 00:13:05.338209  248718 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1031 00:13:05.343889  248718 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:13:05.343913  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:13:05.391973  248718 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:13:05.392002  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:13:05.442745  248718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:13:06.821970  248718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.503163864s)
	I1031 00:13:06.822030  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.822047  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.821970  248718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.506945748s)
	I1031 00:13:06.822097  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.822123  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.822539  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.822568  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.822594  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.822620  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.822641  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.822654  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.822665  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.822689  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.822702  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.822711  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.823128  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.823187  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.823196  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.823249  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.823286  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.823305  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.838726  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.838749  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.839036  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.839101  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.839124  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.863966  248718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.421170822s)
	I1031 00:13:06.864085  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.864105  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.864472  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.864499  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.864511  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.864520  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.865117  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.865133  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.865136  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.865144  248718 addons.go:467] Verifying addon metrics-server=true in "embed-certs-078843"
	I1031 00:13:06.868351  248718 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1031 00:13:06.869950  248718 addons.go:502] enable addons completed in 1.812918702s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1031 00:13:07.438581  248718 node_ready.go:58] node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.402138  249055 crio.go:444] Took 1.936056 seconds to copy over tarball
	I1031 00:13:04.402221  249055 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 00:13:07.956805  249055 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.554540356s)
	I1031 00:13:07.956841  249055 crio.go:451] Took 3.554667 seconds to extract the tarball
	I1031 00:13:07.956854  249055 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 00:13:08.017763  249055 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:13:08.072921  249055 crio.go:496] all images are preloaded for cri-o runtime.
	I1031 00:13:08.072982  249055 cache_images.go:84] Images are preloaded, skipping loading
	I1031 00:13:08.073063  249055 ssh_runner.go:195] Run: crio config
	I1031 00:13:08.131013  249055 cni.go:84] Creating CNI manager for ""
	I1031 00:13:08.131045  249055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:08.131070  249055 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:13:08.131099  249055 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-892233 NodeName:default-k8s-diff-port-892233 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 00:13:08.131362  249055 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-892233"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 00:13:08.131583  249055 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-892233 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-892233 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1031 00:13:08.131658  249055 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 00:13:08.140884  249055 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:13:08.140973  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:13:08.149405  249055 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (386 bytes)
	I1031 00:13:08.166006  249055 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:13:08.182874  249055 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1031 00:13:08.200304  249055 ssh_runner.go:195] Run: grep 192.168.39.2	control-plane.minikube.internal$ /etc/hosts
	I1031 00:13:08.203993  249055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:13:08.217645  249055 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233 for IP: 192.168.39.2
	I1031 00:13:08.217692  249055 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:13:08.217873  249055 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:13:08.217924  249055 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:13:08.218015  249055 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/client.key
	I1031 00:13:08.308243  249055 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/apiserver.key.dd3b77ed
	I1031 00:13:08.308354  249055 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/proxy-client.key
	I1031 00:13:08.308540  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:13:08.308606  249055 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:13:08.308626  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:13:08.308652  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:13:08.308678  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:13:08.308701  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:13:08.308743  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:13:08.309489  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:13:08.339601  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:13:08.365873  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:13:08.393028  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1031 00:13:08.418983  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:13:08.445555  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:13:08.471234  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:13:08.496657  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:13:08.522698  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:13:08.546933  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:13:08.570645  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:13:08.596096  249055 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:13:08.615431  249055 ssh_runner.go:195] Run: openssl version
	I1031 00:13:08.621901  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:13:08.633316  249055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:13:08.638479  249055 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:13:08.638546  249055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:13:08.644750  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:13:08.656306  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:13:08.669978  249055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:08.675964  249055 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:08.676033  249055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:08.682433  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 00:13:08.694215  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:13:08.706255  249055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:13:08.713046  249055 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:13:08.713147  249055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:13:08.720902  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1031 00:13:08.732062  249055 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:13:08.737112  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:13:08.745040  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:13:08.753046  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:13:08.759410  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:13:08.765847  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:13:08.772651  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1031 00:13:08.779086  249055 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-892233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-892233 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.2 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:13:08.779224  249055 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:13:08.779292  249055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:13:08.832024  249055 cri.go:89] found id: ""
	I1031 00:13:08.832096  249055 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 00:13:08.842618  249055 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 00:13:08.842641  249055 kubeadm.go:636] restartCluster start
	I1031 00:13:08.842716  249055 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 00:13:08.852209  249055 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:08.853480  249055 kubeconfig.go:92] found "default-k8s-diff-port-892233" server: "https://192.168.39.2:8444"
	I1031 00:13:08.855965  249055 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 00:13:08.865555  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:08.865617  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:08.877258  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:08.877285  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:08.877332  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:08.887847  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:05.643929  248387 pod_ready.go:92] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:05.643958  248387 pod_ready.go:81] duration metric: took 8.31111047s waiting for pod "kube-scheduler-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:05.643971  248387 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:07.946810  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:06.186224  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:06.186916  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:06.186948  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:06.186867  249826 retry.go:31] will retry after 964.405003ms: waiting for machine to come up
	I1031 00:13:07.153455  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:07.153973  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:07.154006  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:07.153917  249826 retry.go:31] will retry after 1.515980724s: waiting for machine to come up
	I1031 00:13:08.671700  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:08.672189  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:08.672219  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:08.672117  249826 retry.go:31] will retry after 2.254841495s: waiting for machine to come up
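	The libmachine lines above poll the KVM network for the VM's DHCP lease and sleep for a progressively longer interval between attempts (0.96s, 1.5s, 2.25s, ...). A minimal Go sketch of that retry pattern, with a hypothetical lookupIP probe standing in for the real libvirt lease query:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical stand-in for the libvirt DHCP-lease query seen in the log.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP retries lookupIP with a growing, jittered delay until it succeeds
	// or the deadline passes, mirroring the "will retry after ..." lines above.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := time.Second
		for attempt := 1; ; attempt++ {
			ip, err := lookupIP()
			if err == nil {
				return ip, nil
			}
			if time.Now().After(deadline) {
				return "", fmt.Errorf("timed out waiting for machine to come up: %w", err)
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
			fmt.Printf("attempt %d failed, will retry after %v\n", attempt, sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
	}

	func main() {
		if ip, err := waitForIP(10 * time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("machine is up at", ip)
		}
	}
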
	I1031 00:13:09.658372  248718 node_ready.go:58] node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:11.938230  248718 node_ready.go:58] node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:12.439097  248718 node_ready.go:49] node "embed-certs-078843" has status "Ready":"True"
	I1031 00:13:12.439129  248718 node_ready.go:38] duration metric: took 7.101255254s waiting for node "embed-certs-078843" to be "Ready" ...
	I1031 00:13:12.439147  248718 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:13:12.447673  248718 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.469967  248718 pod_ready.go:92] pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.470002  248718 pod_ready.go:81] duration metric: took 22.292329ms waiting for pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.470017  248718 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.482061  248718 pod_ready.go:92] pod "etcd-embed-certs-078843" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.482092  248718 pod_ready.go:81] duration metric: took 12.066806ms waiting for pod "etcd-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.482106  248718 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.489019  248718 pod_ready.go:92] pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.489052  248718 pod_ready.go:81] duration metric: took 6.936171ms waiting for pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.489066  248718 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.500686  248718 pod_ready.go:92] pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.500712  248718 pod_ready.go:81] duration metric: took 11.637946ms waiting for pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.500722  248718 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-287dq" in "kube-system" namespace to be "Ready" ...
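	pod_ready.go above derives readiness from each pod's Ready condition and reports the wait as a duration metric. A minimal client-go sketch of the same check, polling one of the pods named in the log (the kubeconfig path is an assumption, not taken from the report):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// The kubeconfig path is illustrative; the pod name comes from the log above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		start := time.Now()
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-287dq", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				// This is the "duration metric: took ..." value reported by pod_ready.go.
				fmt.Printf("pod ready after %v\n", time.Since(start))
				return
			}
			time.Sleep(2 * time.Second)
		}
	}
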
	I1031 00:13:09.388669  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:09.388776  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:09.400708  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:09.888027  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:09.888146  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:09.900678  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:10.388004  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:10.388114  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:10.403685  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:10.888198  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:10.888314  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:10.900608  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:11.388239  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:11.388363  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:11.404992  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:11.888425  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:11.888541  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:11.900436  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:12.388293  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:12.388418  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:12.404621  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:12.888037  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:12.888156  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:12.900860  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:13.388276  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:13.388371  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:13.400841  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:13.888124  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:13.888238  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:13.903041  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:10.168791  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:12.169662  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:14.669047  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:10.928893  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:10.929414  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:10.929445  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:10.929369  249826 retry.go:31] will retry after 2.792980456s: waiting for machine to come up
	I1031 00:13:13.724006  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:13.724430  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:13.724469  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:13.724356  249826 retry.go:31] will retry after 2.555956413s: waiting for machine to come up
	I1031 00:13:12.838631  248718 pod_ready.go:92] pod "kube-proxy-287dq" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.838658  248718 pod_ready.go:81] duration metric: took 337.929955ms waiting for pod "kube-proxy-287dq" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.838668  248718 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:13.239513  248718 pod_ready.go:92] pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:13.239541  248718 pod_ready.go:81] duration metric: took 400.86714ms waiting for pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:13.239552  248718 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:15.546507  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:14.388661  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:14.388736  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:14.402388  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:14.888855  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:14.888965  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:14.903137  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:15.388757  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:15.388868  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:15.404412  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:15.888848  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:15.888984  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:15.902181  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:16.388790  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:16.388913  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:16.402283  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:16.888892  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:16.889035  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:16.900677  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:17.388842  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:17.388983  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:17.401399  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:17.888981  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:17.889099  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:17.901474  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:18.387997  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:18.388083  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:18.399745  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:18.866186  249055 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:13:18.866263  249055 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:13:18.866282  249055 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:13:18.866352  249055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:13:18.906125  249055 cri.go:89] found id: ""
	I1031 00:13:18.906214  249055 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:13:18.921555  249055 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:13:18.930111  249055 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:13:18.930193  249055 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:13:18.938516  249055 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:13:18.938545  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:19.070700  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:17.167517  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:19.170710  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:16.282473  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:16.282944  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:16.282975  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:16.282900  249826 retry.go:31] will retry after 2.811414756s: waiting for machine to come up
	I1031 00:13:19.096338  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:19.096738  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:19.096760  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:19.096714  249826 retry.go:31] will retry after 3.844203493s: waiting for machine to come up
	I1031 00:13:17.548558  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:20.047074  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:22.047691  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:20.139806  249055 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069066882s)
	I1031 00:13:20.139847  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:20.337823  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:20.417915  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:20.499750  249055 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:13:20.499831  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:20.515735  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:21.029420  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:21.529636  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:22.029757  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:22.529034  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:23.029479  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:23.055542  249055 api_server.go:72] duration metric: took 2.555800185s to wait for apiserver process to appear ...
	I1031 00:13:23.055573  249055 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:13:23.055591  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:21.667545  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:24.167560  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:22.943000  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:22.943492  248084 main.go:141] libmachine: (old-k8s-version-225140) Found IP for machine: 192.168.72.65
	I1031 00:13:22.943521  248084 main.go:141] libmachine: (old-k8s-version-225140) Reserving static IP address...
	I1031 00:13:22.943540  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has current primary IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:22.944080  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "old-k8s-version-225140", mac: "52:54:00:9c:98:61", ip: "192.168.72.65"} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:22.944120  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | skip adding static IP to network mk-old-k8s-version-225140 - found existing host DHCP lease matching {name: "old-k8s-version-225140", mac: "52:54:00:9c:98:61", ip: "192.168.72.65"}
	I1031 00:13:22.944139  248084 main.go:141] libmachine: (old-k8s-version-225140) Reserved static IP address: 192.168.72.65
	I1031 00:13:22.944160  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Getting to WaitForSSH function...
	I1031 00:13:22.944168  248084 main.go:141] libmachine: (old-k8s-version-225140) Waiting for SSH to be available...
	I1031 00:13:22.946799  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:22.947189  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:22.947222  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:22.947416  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Using SSH client type: external
	I1031 00:13:22.947448  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa (-rw-------)
	I1031 00:13:22.947508  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:13:22.947534  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | About to run SSH command:
	I1031 00:13:22.947581  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | exit 0
	I1031 00:13:23.045850  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | SSH cmd err, output: <nil>: 
	I1031 00:13:23.046239  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetConfigRaw
	I1031 00:13:23.046996  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetIP
	I1031 00:13:23.050061  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.050464  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.050496  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.050789  248084 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/config.json ...
	I1031 00:13:23.051046  248084 machine.go:88] provisioning docker machine ...
	I1031 00:13:23.051070  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:23.051289  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetMachineName
	I1031 00:13:23.051484  248084 buildroot.go:166] provisioning hostname "old-k8s-version-225140"
	I1031 00:13:23.051511  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetMachineName
	I1031 00:13:23.051731  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.054157  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.054603  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.054636  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.054784  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:23.055085  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.055291  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.055503  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:23.055718  248084 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:23.056178  248084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1031 00:13:23.056203  248084 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-225140 && echo "old-k8s-version-225140" | sudo tee /etc/hostname
	I1031 00:13:23.184296  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-225140
	
	I1031 00:13:23.184356  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.187270  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.187720  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.187761  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.187895  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:23.188085  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.188228  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.188340  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:23.188565  248084 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:23.189104  248084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1031 00:13:23.189135  248084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-225140' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-225140/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-225140' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:13:23.315792  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:13:23.315829  248084 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:13:23.315893  248084 buildroot.go:174] setting up certificates
	I1031 00:13:23.315906  248084 provision.go:83] configureAuth start
	I1031 00:13:23.315921  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetMachineName
	I1031 00:13:23.316224  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetIP
	I1031 00:13:23.319690  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.320111  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.320143  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.320315  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.322897  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.323334  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.323362  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.323720  248084 provision.go:138] copyHostCerts
	I1031 00:13:23.323803  248084 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:13:23.323820  248084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:13:23.323895  248084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:13:23.324025  248084 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:13:23.324043  248084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:13:23.324080  248084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:13:23.324257  248084 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:13:23.324272  248084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:13:23.324313  248084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:13:23.324415  248084 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-225140 san=[192.168.72.65 192.168.72.65 localhost 127.0.0.1 minikube old-k8s-version-225140]
	I1031 00:13:23.580836  248084 provision.go:172] copyRemoteCerts
	I1031 00:13:23.580905  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:13:23.580929  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.584088  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.584527  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.584576  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.584872  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:23.585115  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.585290  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:23.585440  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:13:23.680241  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1031 00:13:23.706003  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 00:13:23.730993  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:13:23.760873  248084 provision.go:86] duration metric: configureAuth took 444.934236ms
	I1031 00:13:23.760909  248084 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:13:23.761208  248084 config.go:182] Loaded profile config "old-k8s-version-225140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1031 00:13:23.761370  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.764798  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.765219  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.765273  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.765411  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:23.765646  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.765868  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.766036  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:23.766256  248084 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:23.766762  248084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1031 00:13:23.766796  248084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:13:24.109914  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:13:24.109946  248084 machine.go:91] provisioned docker machine in 1.058882555s
	I1031 00:13:24.109958  248084 start.go:300] post-start starting for "old-k8s-version-225140" (driver="kvm2")
	I1031 00:13:24.109972  248084 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:13:24.109994  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.110392  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:13:24.110456  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:24.113825  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.114298  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.114335  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.114587  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:24.114814  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.114989  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:24.115148  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:13:24.206997  248084 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:13:24.211439  248084 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:13:24.211467  248084 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:13:24.211551  248084 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:13:24.211635  248084 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:13:24.211722  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:13:24.219976  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:13:24.246337  248084 start.go:303] post-start completed in 136.360652ms
	I1031 00:13:24.246366  248084 fix.go:56] fixHost completed within 23.427336969s
	I1031 00:13:24.246389  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:24.249547  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.249876  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.249919  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.250099  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:24.250300  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.250603  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.250815  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:24.251022  248084 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:24.251387  248084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1031 00:13:24.251413  248084 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 00:13:24.366477  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698711204.302770779
	
	I1031 00:13:24.366499  248084 fix.go:206] guest clock: 1698711204.302770779
	I1031 00:13:24.366507  248084 fix.go:219] Guest: 2023-10-31 00:13:24.302770779 +0000 UTC Remote: 2023-10-31 00:13:24.246369619 +0000 UTC m=+368.452785688 (delta=56.40116ms)
	I1031 00:13:24.366558  248084 fix.go:190] guest clock delta is within tolerance: 56.40116ms
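	fix.go above reads the guest clock over SSH with date +%s.%N, compares it to the host's wall clock, and accepts the machine only if the difference stays within a tolerance (the measured delta here is 56.4ms). A small sketch of that comparison; the one-second tolerance is an assumption, since the report only shows the measured delta:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses the guest's `date +%s.%N` output and returns how far
	// it is from the supplied host time.
	func guestClockDelta(guestOutput string, hostTime time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := hostTime.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		return delta, nil
	}

	func main() {
		const tolerance = time.Second // assumed value for illustration
		// Guest and host timestamps are taken from the log above.
		host := time.Unix(0, 1698711204246369619)
		delta, err := guestClockDelta("1698711204.302770779", host)
		if err != nil {
			panic(err)
		}
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < tolerance)
	}
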
	I1031 00:13:24.366570  248084 start.go:83] releasing machines lock for "old-k8s-version-225140", held for 23.547580429s
	I1031 00:13:24.366599  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.366871  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetIP
	I1031 00:13:24.369640  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.369985  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.370032  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.370155  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.370695  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.370910  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.370996  248084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:13:24.371044  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:24.371205  248084 ssh_runner.go:195] Run: cat /version.json
	I1031 00:13:24.371233  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:24.373962  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.374315  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.374349  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.374379  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.374621  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:24.374759  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.374796  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.374822  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.374952  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:24.375018  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:24.375140  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.375139  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:13:24.375278  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:24.375383  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:13:24.490387  248084 ssh_runner.go:195] Run: systemctl --version
	I1031 00:13:24.497758  248084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:13:24.645967  248084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:13:24.652716  248084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:13:24.652795  248084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:13:24.668415  248084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 00:13:24.668446  248084 start.go:472] detecting cgroup driver to use...
	I1031 00:13:24.668513  248084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:13:24.683255  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:13:24.697242  248084 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:13:24.697295  248084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:13:24.710554  248084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:13:24.725562  248084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:13:24.847447  248084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:13:24.982382  248084 docker.go:214] disabling docker service ...
	I1031 00:13:24.982477  248084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:13:24.998270  248084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:13:25.011136  248084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:13:25.129421  248084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:13:25.258387  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 00:13:25.271528  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:13:25.291702  248084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1031 00:13:25.291788  248084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:25.301762  248084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:13:25.301826  248084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:25.311900  248084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:25.322111  248084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:25.331429  248084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 00:13:25.344907  248084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:13:25.354397  248084 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 00:13:25.354463  248084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 00:13:25.367335  248084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 00:13:25.376415  248084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:13:25.493551  248084 ssh_runner.go:195] Run: sudo systemctl restart crio
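	The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroupfs cgroup manager, conmon_cgroup = "pod") and then restarts CRI-O. A sketch of the same edits driven from Go through a hypothetical run helper standing in for minikube's ssh_runner; here the commands run locally via sh -c, so treat it as illustrative only:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes one shell command and returns its combined output on failure.
	// It is a hypothetical stand-in for the remote ssh_runner calls in the log.
	func run(cmd string) error {
		out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s: %v\n%s", cmd, err, out)
		}
		return nil
	}

	func main() {
		// These are the same edits the log applies to /etc/crio/crio.conf.d/02-crio.conf.
		cmds := []string{
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo systemctl daemon-reload`,
			`sudo systemctl restart crio`,
		}
		for _, c := range cmds {
			if err := run(c); err != nil {
				fmt.Println(err)
				return
			}
		}
	}
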
	I1031 00:13:25.677504  248084 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:13:25.677648  248084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:13:25.683882  248084 start.go:540] Will wait 60s for crictl version
	I1031 00:13:25.683952  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:25.687748  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:13:25.729230  248084 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:13:25.729316  248084 ssh_runner.go:195] Run: crio --version
	I1031 00:13:25.782619  248084 ssh_runner.go:195] Run: crio --version
	I1031 00:13:25.832400  248084 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1031 00:13:25.833898  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetIP
	I1031 00:13:25.836924  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:25.837347  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:25.837372  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:25.837666  248084 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1031 00:13:25.841940  248084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:13:24.051460  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:26.554325  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:26.499116  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:26.499157  249055 api_server.go:103] status: https://192.168.39.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:26.499172  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:26.509898  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:26.509929  249055 api_server.go:103] status: https://192.168.39.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:27.010543  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:27.024054  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:13:27.024104  249055 api_server.go:103] status: https://192.168.39.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:13:27.510303  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:27.518621  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:13:27.518658  249055 api_server.go:103] status: https://192.168.39.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:13:28.010147  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:28.017834  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 200:
	ok
	I1031 00:13:28.027903  249055 api_server.go:141] control plane version: v1.28.3
	I1031 00:13:28.028005  249055 api_server.go:131] duration metric: took 4.972421145s to wait for apiserver health ...
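Editor's note: the 403 -> 500 -> 200 progression above is the normal apiserver healthz wait; the endpoint returns 403 before anonymous access to /healthz is allowed, 500 while post-start hooks (rbac/bootstrap-roles, bootstrap-system-priority-classes) are still running, and finally 200. A minimal Go sketch of that polling pattern, assuming a hypothetical helper rather than minikube's actual api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
// or the deadline passes. 403 and 500 responses are treated as "not ready yet",
// mirroring the retry behaviour visible in the log above.
// Hypothetical helper; the real check also presents the cluster CA.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify only for this sketch; minikube verifies the CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.2:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}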
	I1031 00:13:28.028033  249055 cni.go:84] Creating CNI manager for ""
	I1031 00:13:28.028070  249055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:28.030427  249055 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:13:28.032020  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:13:28.042889  249055 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:13:28.084357  249055 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:13:28.114368  249055 system_pods.go:59] 8 kube-system pods found
	I1031 00:13:28.114416  249055 system_pods.go:61] "coredns-5dd5756b68-6sbs7" [4cf52749-359c-42b7-a985-d2cdc3f20700] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1031 00:13:28.114430  249055 system_pods.go:61] "etcd-default-k8s-diff-port-892233" [75c06d7d-877d-4df8-9805-0ea50aec938f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1031 00:13:28.114440  249055 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-892233" [6eb1d4f8-0594-4992-962c-383062853ed0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1031 00:13:28.114460  249055 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-892233" [8b5e8ab9-34fe-4337-95d1-554adbd23505] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1031 00:13:28.114470  249055 system_pods.go:61] "kube-proxy-jn2j8" [23f4d9d7-61a0-43d9-a815-a4ce10a568e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1031 00:13:28.114479  249055 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-892233" [dcb7e68d-4e3d-4e46-935a-1372309ad89c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1031 00:13:28.114488  249055 system_pods.go:61] "metrics-server-57f55c9bc5-7klqw" [3f832e2c-81b4-431e-b1a2-987057fdae0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:13:28.114502  249055 system_pods.go:61] "storage-provisioner" [b912cf02-280b-47e0-8e72-fd22566a40f9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1031 00:13:28.114515  249055 system_pods.go:74] duration metric: took 30.127265ms to wait for pod list to return data ...
	I1031 00:13:28.114534  249055 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:13:28.126920  249055 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:13:28.126971  249055 node_conditions.go:123] node cpu capacity is 2
	I1031 00:13:28.127018  249055 node_conditions.go:105] duration metric: took 12.476154ms to run NodePressure ...
	I1031 00:13:28.127048  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:28.402286  249055 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1031 00:13:28.407352  249055 kubeadm.go:787] kubelet initialised
	I1031 00:13:28.407384  249055 kubeadm.go:788] duration metric: took 5.069821ms waiting for restarted kubelet to initialise ...
	I1031 00:13:28.407397  249055 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:13:28.413100  249055 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace to be "Ready" ...
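Editor's note: the pod_ready.go lines repeatedly fetch each system-critical pod and test its Ready condition. Roughly equivalent client-go logic is sketched below; this is an illustration under assumed names (kubeconfig path, pod name taken from the log), not minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; minikube uses the profile's own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-6sbs7", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}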
	I1031 00:13:26.174532  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:28.667350  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:25.856078  248084 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1031 00:13:25.856136  248084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:13:25.913612  248084 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1031 00:13:25.913733  248084 ssh_runner.go:195] Run: which lz4
	I1031 00:13:25.918632  248084 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1031 00:13:25.923981  248084 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 00:13:25.924014  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1031 00:13:27.712494  248084 crio.go:444] Took 1.793896 seconds to copy over tarball
	I1031 00:13:27.712615  248084 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 00:13:29.050835  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:31.549536  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:30.457173  249055 pod_ready.go:102] pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:33.255838  249055 pod_ready.go:102] pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:30.667667  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:33.167250  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:31.207204  248084 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.494544747s)
	I1031 00:13:31.207238  248084 crio.go:451] Took 3.494710 seconds to extract the tarball
	I1031 00:13:31.207250  248084 ssh_runner.go:146] rm: /preloaded.tar.lz4
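Editor's note: the preload path above is stat the target, copy the ~440 MB tarball over, extract it into /var with lz4, then delete it. A sketch of the extract step as a local wrapper (hypothetical; minikube executes this on the guest through its ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks a preloaded image tarball into /var using lz4,
// matching the "sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4" call in the log.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}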
	I1031 00:13:31.253648  248084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:13:31.312599  248084 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1031 00:13:31.312624  248084 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1031 00:13:31.312719  248084 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.312753  248084 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.312763  248084 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.312776  248084 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1031 00:13:31.312705  248084 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:13:31.313005  248084 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.313122  248084 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.312926  248084 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.314301  248084 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.314408  248084 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:13:31.314826  248084 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.314863  248084 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.314835  248084 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.314877  248084 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.314888  248084 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.314904  248084 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1031 00:13:31.492117  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.493373  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.506179  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.506237  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1031 00:13:31.510547  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.515827  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.524137  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.614442  248084 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1031 00:13:31.614494  248084 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.614544  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.622661  248084 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1031 00:13:31.622718  248084 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.622770  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.630473  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:13:31.674058  248084 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1031 00:13:31.674111  248084 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.674161  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.707251  248084 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1031 00:13:31.707293  248084 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1031 00:13:31.707337  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.718947  248084 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1031 00:13:31.719006  248084 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.719008  248084 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1031 00:13:31.718947  248084 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1031 00:13:31.719056  248084 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.719072  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.719084  248084 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.719111  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.719119  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.719139  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.719176  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.866787  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.866815  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1031 00:13:31.866818  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.866883  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1031 00:13:31.866887  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.866936  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1031 00:13:31.867046  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.993265  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1031 00:13:31.993505  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1031 00:13:31.993999  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1031 00:13:31.994045  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1031 00:13:31.994063  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1031 00:13:31.994123  248084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1031 00:13:31.999020  248084 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1031 00:13:31.999034  248084 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1031 00:13:31.999068  248084 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1031 00:13:33.460498  248084 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.461402246s)
	I1031 00:13:33.460530  248084 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1031 00:13:33.460582  248084 cache_images.go:92] LoadImages completed in 2.147945804s
	W1031 00:13:33.460661  248084 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I1031 00:13:33.460749  248084 ssh_runner.go:195] Run: crio config
	I1031 00:13:33.528812  248084 cni.go:84] Creating CNI manager for ""
	I1031 00:13:33.528838  248084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:33.528865  248084 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:13:33.528895  248084 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.65 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-225140 NodeName:old-k8s-version-225140 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1031 00:13:33.529103  248084 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-225140"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-225140
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.65:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 00:13:33.529205  248084 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-225140 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-225140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 00:13:33.529276  248084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1031 00:13:33.539328  248084 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:13:33.539424  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:13:33.551543  248084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1031 00:13:33.569095  248084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:13:33.586561  248084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1031 00:13:33.605084  248084 ssh_runner.go:195] Run: grep 192.168.72.65	control-plane.minikube.internal$ /etc/hosts
	I1031 00:13:33.609322  248084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
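Editor's note: both hosts updates in this run use the same shell idiom: filter out any existing line for the name, append a fresh "ip<TAB>name" entry to a temp file, then copy the temp file back over /etc/hosts. The same logic in Go, as a sketch rather than minikube's code:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostEntry drops any line ending in "\t<name>" from the hosts file and
// appends "ip\tname", mirroring the grep -v / echo / cp pipeline in the log.
func upsertHostEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostEntry("/etc/hosts", "192.168.72.65", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}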
	I1031 00:13:33.623527  248084 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140 for IP: 192.168.72.65
	I1031 00:13:33.623556  248084 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:13:33.623768  248084 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:13:33.623817  248084 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:13:33.623919  248084 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.key
	I1031 00:13:33.624000  248084 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/apiserver.key.fa85241c
	I1031 00:13:33.624074  248084 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/proxy-client.key
	I1031 00:13:33.624223  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:13:33.624267  248084 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:13:33.624285  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:13:33.624333  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:13:33.624377  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:13:33.624409  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:13:33.624480  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:13:33.625311  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:13:33.648457  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:13:33.673383  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:13:33.701679  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 00:13:33.725823  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:13:33.748912  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:13:33.777397  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:13:33.803003  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:13:33.827749  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:13:33.850011  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:13:33.871722  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:13:33.894663  248084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:13:33.912130  248084 ssh_runner.go:195] Run: openssl version
	I1031 00:13:33.918010  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:13:33.928381  248084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:13:33.933548  248084 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:13:33.933605  248084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:13:33.939344  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1031 00:13:33.950844  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:13:33.962585  248084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:13:33.968178  248084 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:13:33.968244  248084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:13:33.975606  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:13:33.986565  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:13:33.998188  248084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:34.003940  248084 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:34.004012  248084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:34.010088  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
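Editor's note: the cert steps above install each CA under /usr/share/ca-certificates and then link it into /etc/ssl/certs as <subject-hash>.0, which is how OpenSSL locates trust anchors; the "openssl x509 ... -checkend 86400" calls that follow simply confirm each cert stays valid for at least another day. A sketch of the hash-and-link step, with a hypothetical helper name and an assumed cert path:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a PEM certificate and creates
// the /etc/ssl/certs/<hash>.0 symlink that the "openssl x509 -hash" + "ln -fs"
// pair in the log produces.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %v", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // "-f" behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}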
	I1031 00:13:34.022223  248084 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:13:34.028537  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:13:34.036319  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:13:34.043481  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:13:34.051269  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:13:34.058129  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:13:34.065473  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1031 00:13:34.072663  248084 kubeadm.go:404] StartCluster: {Name:old-k8s-version-225140 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-225140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.65 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:13:34.072781  248084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:13:34.072830  248084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:13:34.121758  248084 cri.go:89] found id: ""
	I1031 00:13:34.121848  248084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 00:13:34.135357  248084 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 00:13:34.135392  248084 kubeadm.go:636] restartCluster start
	I1031 00:13:34.135469  248084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 00:13:34.145173  248084 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:34.146905  248084 kubeconfig.go:92] found "old-k8s-version-225140" server: "https://192.168.72.65:8443"
	I1031 00:13:34.150660  248084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 00:13:34.163037  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:34.163119  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:34.184414  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:34.184441  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:34.184586  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:34.197787  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:34.698120  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:34.698246  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:34.710874  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:35.198312  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:35.198384  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:35.210933  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:35.698108  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:35.698210  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:35.710184  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:33.551354  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:36.048781  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:35.442171  249055 pod_ready.go:102] pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:36.941322  249055 pod_ready.go:92] pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:36.941344  249055 pod_ready.go:81] duration metric: took 8.528221711s waiting for pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:36.941353  249055 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:38.959679  249055 pod_ready.go:102] pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:35.168250  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:37.666699  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:36.198699  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:36.198787  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:36.211005  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:36.698612  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:36.698705  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:36.712106  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:37.198674  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:37.198779  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:37.211665  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:37.698160  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:37.698258  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:37.709798  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:38.198294  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:38.198410  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:38.210400  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:38.697965  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:38.698058  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:38.710188  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:39.198306  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:39.198435  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:39.210213  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:39.698867  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:39.698944  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:39.709958  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:40.198113  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:40.198217  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:40.209265  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:40.698424  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:40.698494  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:40.715194  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:38.548167  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:41.047378  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:39.959598  249055 pod_ready.go:92] pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:39.959625  249055 pod_ready.go:81] duration metric: took 3.018261782s waiting for pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.959638  249055 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.965182  249055 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:39.965204  249055 pod_ready.go:81] duration metric: took 5.558563ms waiting for pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.965218  249055 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.970258  249055 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:39.970283  249055 pod_ready.go:81] duration metric: took 5.058027ms waiting for pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.970293  249055 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jn2j8" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.975183  249055 pod_ready.go:92] pod "kube-proxy-jn2j8" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:39.975202  249055 pod_ready.go:81] duration metric: took 4.903272ms waiting for pod "kube-proxy-jn2j8" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.975209  249055 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:40.137875  249055 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:40.137907  249055 pod_ready.go:81] duration metric: took 162.69035ms waiting for pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:40.137921  249055 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:42.452793  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:40.167385  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:42.666396  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:41.198534  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:41.198640  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:41.210412  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:41.698420  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:41.698526  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:41.710324  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:42.198572  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:42.198649  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:42.210399  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:42.697932  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:42.698010  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:42.711010  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:43.198096  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:43.198182  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:43.209468  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:43.698864  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:43.698998  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:43.710735  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:44.163493  248084 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:13:44.163545  248084 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:13:44.163560  248084 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:13:44.163621  248084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:13:44.204352  248084 cri.go:89] found id: ""
	I1031 00:13:44.204444  248084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:13:44.219641  248084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:13:44.228342  248084 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:13:44.228420  248084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:13:44.237058  248084 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:13:44.237081  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:44.369926  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:45.077715  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:45.306025  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:45.399572  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
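Editor's note: restartCluster found none of the expected kubeconfig files on disk, so instead of a full kubeadm init it replays individual "kubeadm init phase" subcommands against the regenerated kubeadm.yaml. A sketch of that phase loop (hypothetical wrapper; minikube runs these via its ssh_runner with the versioned binaries on PATH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runKubeadmPhases replays the phase sequence visible in the log:
// certs, kubeconfig, kubelet-start, control-plane and local etcd,
// all against the already-written /var/tmp/minikube/kubeadm.yaml.
func runKubeadmPhases(binDir, config string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", config)
		cmd := exec.Command(binDir+"/kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("kubeadm %v: %w", p, err)
		}
	}
	return nil
}

func main() {
	if err := runKubeadmPhases("/var/lib/minikube/binaries/v1.16.0", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}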
	I1031 00:13:45.537955  248084 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:13:45.538046  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:45.554284  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:43.549424  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:46.052253  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:44.947118  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:46.954020  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:45.167622  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:47.669895  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:46.073056  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:46.572408  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:47.072392  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:47.098617  248084 api_server.go:72] duration metric: took 1.560662194s to wait for apiserver process to appear ...
	I1031 00:13:47.098650  248084 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:13:47.098673  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:48.547476  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:50.547537  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:49.446620  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:51.946346  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:53.949089  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:52.098997  248084 api_server.go:269] stopped: https://192.168.72.65:8443/healthz: Get "https://192.168.72.65:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1031 00:13:52.099073  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:52.709441  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:52.709490  248084 api_server.go:103] status: https://192.168.72.65:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:53.210178  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:53.216374  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1031 00:13:53.216403  248084 api_server.go:103] status: https://192.168.72.65:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1031 00:13:53.709935  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:53.717326  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1031 00:13:53.717361  248084 api_server.go:103] status: https://192.168.72.65:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1031 00:13:54.209883  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:54.215985  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 200:
	ok
	I1031 00:13:54.224088  248084 api_server.go:141] control plane version: v1.16.0
	I1031 00:13:54.224115  248084 api_server.go:131] duration metric: took 7.125456227s to wait for apiserver health ...
	I1031 00:13:54.224127  248084 cni.go:84] Creating CNI manager for ""
	I1031 00:13:54.224135  248084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:54.226152  248084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:13:50.168563  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:52.669900  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:54.227723  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:13:54.239709  248084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:13:54.261391  248084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:13:54.273728  248084 system_pods.go:59] 7 kube-system pods found
	I1031 00:13:54.273761  248084 system_pods.go:61] "coredns-5644d7b6d9-2s6pc" [c77d23a4-28d0-4bbf-bb28-baff23fc4987] Running
	I1031 00:13:54.273775  248084 system_pods.go:61] "etcd-old-k8s-version-225140" [dcc629ce-f107-4d14-b69b-20228b00b7c5] Running
	I1031 00:13:54.273783  248084 system_pods.go:61] "kube-apiserver-old-k8s-version-225140" [38fd683e-51fa-40f0-a3c6-afdf57e14132] Running
	I1031 00:13:54.273791  248084 system_pods.go:61] "kube-controller-manager-old-k8s-version-225140" [29b1b9cb-1819-497e-b0f9-c008b0ac6e26] Running
	I1031 00:13:54.273803  248084 system_pods.go:61] "kube-proxy-fxz8t" [57ccd26e-cbcf-4ed3-adbe-778fd8bcf27c] Running
	I1031 00:13:54.273811  248084 system_pods.go:61] "kube-scheduler-old-k8s-version-225140" [d8d4d75c-25f8-4485-853c-8fa75105c6e2] Running
	I1031 00:13:54.273818  248084 system_pods.go:61] "storage-provisioner" [8fc76055-6a96-4884-8f91-b2d3f598bc88] Running
	I1031 00:13:54.273826  248084 system_pods.go:74] duration metric: took 12.417629ms to wait for pod list to return data ...
	I1031 00:13:54.273840  248084 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:13:54.279056  248084 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:13:54.279082  248084 node_conditions.go:123] node cpu capacity is 2
	I1031 00:13:54.279094  248084 node_conditions.go:105] duration metric: took 5.248504ms to run NodePressure ...
	I1031 00:13:54.279111  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:54.594257  248084 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1031 00:13:54.600279  248084 retry.go:31] will retry after 287.663167ms: kubelet not initialised
	I1031 00:13:54.899142  248084 retry.go:31] will retry after 297.826066ms: kubelet not initialised
	I1031 00:13:55.205347  248084 retry.go:31] will retry after 797.709551ms: kubelet not initialised
	I1031 00:13:52.548142  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:54.548667  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:57.047942  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:56.446395  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:58.946167  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:55.167909  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:57.668179  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:59.668339  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:56.009099  248084 retry.go:31] will retry after 571.448668ms: kubelet not initialised
	I1031 00:13:56.593388  248084 retry.go:31] will retry after 1.82270665s: kubelet not initialised
	I1031 00:13:58.421789  248084 retry.go:31] will retry after 1.094040234s: kubelet not initialised
	I1031 00:13:59.522021  248084 retry.go:31] will retry after 3.716569913s: kubelet not initialised
	I1031 00:13:59.549278  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:01.551103  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:01.446913  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:03.947203  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:01.668422  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:03.668478  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:03.244381  248084 retry.go:31] will retry after 4.104024564s: kubelet not initialised
	I1031 00:14:04.048498  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:06.548070  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:06.447864  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:08.945886  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:06.166653  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:08.167008  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:07.354371  248084 retry.go:31] will retry after 9.18347873s: kubelet not initialised
	I1031 00:14:09.047421  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:11.048479  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:11.448689  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:13.948268  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:10.667348  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:12.667812  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:13.052934  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:15.547846  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:16.446625  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:18.447872  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:15.167259  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:17.666670  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:19.667251  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:16.544997  248084 retry.go:31] will retry after 8.29261189s: kubelet not initialised
	I1031 00:14:17.550692  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:20.045758  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:22.047516  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:20.946805  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:23.446875  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:21.667436  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:24.167210  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:24.843011  248084 retry.go:31] will retry after 15.309414425s: kubelet not initialised
	I1031 00:14:24.048197  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:26.546847  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:25.946796  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:27.950212  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:26.167443  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:28.168482  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:28.548116  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:31.047187  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:30.446164  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:32.451487  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:30.666762  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:32.667234  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:33.049216  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:35.545964  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:34.946961  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:36.947212  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:38.949437  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:35.167751  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:37.668981  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:39.669233  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:40.157618  248084 kubeadm.go:787] kubelet initialised
	I1031 00:14:40.157647  248084 kubeadm.go:788] duration metric: took 45.563360213s waiting for restarted kubelet to initialise ...
	I1031 00:14:40.157660  248084 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:14:40.163372  248084 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-2s6pc" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.169776  248084 pod_ready.go:92] pod "coredns-5644d7b6d9-2s6pc" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.169798  248084 pod_ready.go:81] duration metric: took 6.398827ms waiting for pod "coredns-5644d7b6d9-2s6pc" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.169806  248084 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-b6lnc" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.175023  248084 pod_ready.go:92] pod "coredns-5644d7b6d9-b6lnc" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.175047  248084 pod_ready.go:81] duration metric: took 5.233827ms waiting for pod "coredns-5644d7b6d9-b6lnc" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.175058  248084 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.179248  248084 pod_ready.go:92] pod "etcd-old-k8s-version-225140" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.179269  248084 pod_ready.go:81] duration metric: took 4.202967ms waiting for pod "etcd-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.179279  248084 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.183579  248084 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-225140" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.183593  248084 pod_ready.go:81] duration metric: took 4.308627ms waiting for pod "kube-apiserver-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.183604  248084 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.558275  248084 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-225140" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.558308  248084 pod_ready.go:81] duration metric: took 374.694908ms waiting for pod "kube-controller-manager-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.558321  248084 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fxz8t" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:37.547289  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:40.047586  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:41.446752  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:43.447874  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:42.166207  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:44.167277  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:40.958069  248084 pod_ready.go:92] pod "kube-proxy-fxz8t" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.958099  248084 pod_ready.go:81] duration metric: took 399.768399ms waiting for pod "kube-proxy-fxz8t" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.958112  248084 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:41.358244  248084 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-225140" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:41.358274  248084 pod_ready.go:81] duration metric: took 400.15381ms waiting for pod "kube-scheduler-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:41.358284  248084 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:43.666594  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:45.666948  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:42.547950  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:45.047306  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:45.946510  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:47.946663  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:46.167952  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:48.667854  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:48.166448  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:50.167022  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:47.547211  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:49.548100  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:51.548509  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:50.446801  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:52.447233  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:51.168676  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:53.667170  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:52.666608  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:54.667583  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:53.550528  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:56.050177  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:54.947677  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:57.447082  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:55.669616  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:58.170640  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:57.165612  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:59.168165  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:58.548441  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:01.047296  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:59.447626  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:01.947292  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:00.669772  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:03.167493  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:01.665706  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:04.166609  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:03.546708  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:05.547092  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:04.447672  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:06.449541  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:08.948333  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:05.667422  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:07.669173  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:06.666325  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:09.165998  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:07.547133  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:09.547568  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:11.551676  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:11.446875  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:13.946673  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:10.168209  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:12.666973  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:14.668147  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:11.166824  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:13.665410  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:14.046068  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:16.047803  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:15.946975  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:18.445704  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:17.167480  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:19.668157  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:16.165876  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:18.166620  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:20.666455  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:18.549666  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:21.046823  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:20.447212  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:22.947109  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:22.167144  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:24.168041  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:22.667076  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:25.167164  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:23.047419  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:25.049728  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:24.947312  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:27.449246  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:26.669861  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:29.168519  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:27.666465  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:30.166123  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:27.547889  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:30.046604  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:32.048045  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:29.948497  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:32.446948  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:31.670479  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:34.167604  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:32.668009  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:35.165749  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:34.547533  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:37.048031  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:34.945337  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:36.947811  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:36.168180  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:38.170343  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:37.168053  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:39.665709  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:39.552108  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:42.047262  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:39.451699  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:41.946296  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:40.667428  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:42.668235  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:41.666624  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:44.166672  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:44.047729  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:46.549442  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:44.447109  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:46.448250  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:48.947017  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:45.167138  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:47.666886  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:49.667907  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:46.669428  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:49.166194  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:49.047526  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:51.049047  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:50.947410  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:53.446734  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:52.167771  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:54.167875  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:51.666228  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:53.667295  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:53.052036  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:55.547767  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:55.946776  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:58.446825  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:56.668562  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:59.168110  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:56.167716  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:58.665487  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:00.668666  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:58.047770  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:00.047908  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:02.048356  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:00.946590  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:02.947001  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:01.667160  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:04.167375  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:03.165171  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:05.166289  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:04.049788  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:06.547020  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:05.446511  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:07.449772  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:06.667622  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:08.667665  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:07.166410  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:09.166536  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:09.049966  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:11.547967  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:09.947975  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:12.447789  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:11.168645  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:13.667838  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:11.665962  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:13.667117  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:15.667752  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:14.047716  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:16.048052  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:14.947264  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:16.947386  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:16.167045  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:18.668483  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:17.669275  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:20.167079  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:18.548369  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:20.548635  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:19.448662  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:21.947615  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:21.167164  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:23.167506  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:22.666820  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:25.166614  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:23.046392  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:25.548954  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:24.446814  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:26.945792  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:28.947133  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:25.167732  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:27.168662  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:29.171362  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:27.169221  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:29.667206  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:27.550807  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:30.048391  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:31.448249  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:33.946336  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:31.667185  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:33.667628  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:32.165207  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:34.166237  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:32.546558  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:35.046558  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:37.047654  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:35.946896  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:38.449959  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:35.668366  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:38.168509  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:36.166529  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:38.666448  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:39.552154  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:42.046335  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:40.946962  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:43.446383  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:40.666758  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:42.668031  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:41.168643  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:43.170216  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:45.666959  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:44.046908  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:46.548312  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:45.947573  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:47.947914  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:45.166562  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:47.667578  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:47.667903  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:50.166574  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:49.046763  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:51.047566  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:49.948510  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:52.446760  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:50.168646  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:52.667122  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:54.668132  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:52.168815  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:54.667713  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:53.546751  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:56.048217  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:54.947315  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:57.447727  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:57.169330  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:59.666819  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:57.166002  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:59.168109  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:58.548212  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:01.047033  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:59.448330  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:01.946970  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:01.667755  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:04.167493  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:01.666457  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:04.167186  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:03.546842  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:05.547488  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:04.445743  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:06.446624  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:08.451015  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:05.644115  248387 pod_ready.go:81] duration metric: took 4m0.000125657s waiting for pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace to be "Ready" ...
	E1031 00:17:05.644148  248387 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:17:05.644168  248387 pod_ready.go:38] duration metric: took 4m9.241022532s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:17:05.644198  248387 kubeadm.go:640] restartCluster took 4m28.058055798s
	W1031 00:17:05.644570  248387 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1031 00:17:05.644685  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
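The pod_ready.go entries above are minikube polling the metrics-server pod's "Ready" condition until the 4m0s deadline expires, after which it gives up and falls back to a kubeadm reset. A minimal sketch of that kind of readiness poll, assuming client-go and the default kubeconfig (the pod name, namespace and timeout are taken from the log; the code itself is illustrative, not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is currently True.
func podReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s wait in the log
	for time.Now().Before(deadline) {
		ok, err := podReady(context.Background(), client, "kube-system", "metrics-server-57f55c9bc5-nm8dj")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}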
	I1031 00:17:06.168910  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:08.666612  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:08.047998  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:10.547186  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:10.946940  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:13.455539  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:11.168678  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:13.667122  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:13.046682  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:13.240656  248718 pod_ready.go:81] duration metric: took 4m0.001083426s waiting for pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace to be "Ready" ...
	E1031 00:17:13.240702  248718 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:17:13.240712  248718 pod_ready.go:38] duration metric: took 4m0.801552437s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:17:13.240732  248718 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:17:13.240766  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:17:13.240930  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:17:13.307072  248718 cri.go:89] found id: "bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:13.307099  248718 cri.go:89] found id: ""
	I1031 00:17:13.307108  248718 logs.go:284] 1 containers: [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033]
	I1031 00:17:13.307180  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.312997  248718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:17:13.313067  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:17:13.364439  248718 cri.go:89] found id: "35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:13.364474  248718 cri.go:89] found id: ""
	I1031 00:17:13.364485  248718 logs.go:284] 1 containers: [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6]
	I1031 00:17:13.364561  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.370120  248718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:17:13.370186  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:17:13.413937  248718 cri.go:89] found id: "8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:13.413972  248718 cri.go:89] found id: ""
	I1031 00:17:13.413983  248718 logs.go:284] 1 containers: [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26]
	I1031 00:17:13.414051  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.420586  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:17:13.420669  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:17:13.476980  248718 cri.go:89] found id: "ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:13.477008  248718 cri.go:89] found id: ""
	I1031 00:17:13.477028  248718 logs.go:284] 1 containers: [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80]
	I1031 00:17:13.477100  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.482874  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:17:13.482957  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:17:13.532196  248718 cri.go:89] found id: "f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:13.532232  248718 cri.go:89] found id: ""
	I1031 00:17:13.532244  248718 logs.go:284] 1 containers: [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3]
	I1031 00:17:13.532314  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.539868  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:17:13.540017  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:17:13.595189  248718 cri.go:89] found id: "4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:13.595218  248718 cri.go:89] found id: ""
	I1031 00:17:13.595231  248718 logs.go:284] 1 containers: [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70]
	I1031 00:17:13.595305  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.601429  248718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:17:13.601496  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:17:13.641957  248718 cri.go:89] found id: ""
	I1031 00:17:13.641984  248718 logs.go:284] 0 containers: []
	W1031 00:17:13.641992  248718 logs.go:286] No container was found matching "kindnet"
	I1031 00:17:13.641998  248718 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:17:13.642053  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:17:13.683163  248718 cri.go:89] found id: "86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:13.683193  248718 cri.go:89] found id: "622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:13.683200  248718 cri.go:89] found id: ""
	I1031 00:17:13.683209  248718 logs.go:284] 2 containers: [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c]
	I1031 00:17:13.683266  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.689222  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.693814  248718 logs.go:123] Gathering logs for dmesg ...
	I1031 00:17:13.693839  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:17:13.710167  248718 logs.go:123] Gathering logs for kube-proxy [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3] ...
	I1031 00:17:13.710188  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:13.754241  248718 logs.go:123] Gathering logs for storage-provisioner [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3] ...
	I1031 00:17:13.754273  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:13.800473  248718 logs.go:123] Gathering logs for kube-apiserver [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033] ...
	I1031 00:17:13.800508  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:13.857072  248718 logs.go:123] Gathering logs for coredns [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26] ...
	I1031 00:17:13.857101  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:13.901072  248718 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:17:13.901102  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:17:14.390850  248718 logs.go:123] Gathering logs for container status ...
	I1031 00:17:14.390894  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:17:14.446107  248718 logs.go:123] Gathering logs for etcd [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6] ...
	I1031 00:17:14.446141  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:14.495337  248718 logs.go:123] Gathering logs for kube-scheduler [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80] ...
	I1031 00:17:14.495368  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:14.535558  248718 logs.go:123] Gathering logs for kube-controller-manager [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70] ...
	I1031 00:17:14.535591  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:14.589637  248718 logs.go:123] Gathering logs for kubelet ...
	I1031 00:17:14.589676  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1031 00:17:14.650509  248718 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:17:14.650559  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:17:14.816331  248718 logs.go:123] Gathering logs for storage-provisioner [622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c] ...
	I1031 00:17:14.816362  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
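The cri.go and logs.go entries above show minikube resolving each control-plane component to a container ID with crictl and then tailing the last 400 lines of that container's logs. A rough local equivalent of that lookup, run directly rather than through ssh_runner (assumes crictl is installed and sudo is available):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or not) whose name matches the filter.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		// mirror the "crictl logs --tail 400 <id>" calls recorded above
		logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("=== %s ===\n%s\n", id, logs)
	}
}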
	I1031 00:17:17.363336  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:17:17.378105  248718 api_server.go:72] duration metric: took 4m12.292425365s to wait for apiserver process to appear ...
	I1031 00:17:17.378131  248718 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:17:17.378171  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:17:17.378234  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:17:17.424054  248718 cri.go:89] found id: "bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:17.424082  248718 cri.go:89] found id: ""
	I1031 00:17:17.424091  248718 logs.go:284] 1 containers: [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033]
	I1031 00:17:17.424152  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.428185  248718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:17:17.428246  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:17:17.465132  248718 cri.go:89] found id: "35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:17.465157  248718 cri.go:89] found id: ""
	I1031 00:17:17.465167  248718 logs.go:284] 1 containers: [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6]
	I1031 00:17:17.465219  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.469315  248718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:17:17.469392  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:17:17.504119  248718 cri.go:89] found id: "8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:17.504140  248718 cri.go:89] found id: ""
	I1031 00:17:17.504151  248718 logs.go:284] 1 containers: [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26]
	I1031 00:17:17.504199  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:15.946464  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:17.949398  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:19.822838  248387 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.178119551s)
	I1031 00:17:19.822927  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:17:19.838182  248387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:17:19.847738  248387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:17:19.857883  248387 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:17:19.857939  248387 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1031 00:17:19.911372  248387 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1031 00:17:19.911432  248387 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 00:17:20.091412  248387 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 00:17:20.091582  248387 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 00:17:20.091703  248387 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 00:17:20.351519  248387 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 00:17:16.166533  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:18.668258  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:20.353310  248387 out.go:204]   - Generating certificates and keys ...
	I1031 00:17:20.353500  248387 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 00:17:20.353598  248387 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 00:17:20.353712  248387 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1031 00:17:20.353809  248387 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1031 00:17:20.353933  248387 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1031 00:17:20.354050  248387 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1031 00:17:20.354132  248387 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1031 00:17:20.354241  248387 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1031 00:17:20.354353  248387 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1031 00:17:20.354596  248387 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1031 00:17:20.355193  248387 kubeadm.go:322] [certs] Using the existing "sa" key
	I1031 00:17:20.355332  248387 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 00:17:21.009329  248387 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 00:17:21.145431  248387 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 00:17:21.231013  248387 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 00:17:21.384423  248387 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 00:17:21.385066  248387 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 00:17:21.387895  248387 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 00:17:17.508240  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:17:17.510213  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:17:17.548666  248718 cri.go:89] found id: "ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:17.548692  248718 cri.go:89] found id: ""
	I1031 00:17:17.548702  248718 logs.go:284] 1 containers: [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80]
	I1031 00:17:17.548768  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.552963  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:17:17.553029  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:17:17.593690  248718 cri.go:89] found id: "f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:17.593728  248718 cri.go:89] found id: ""
	I1031 00:17:17.593739  248718 logs.go:284] 1 containers: [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3]
	I1031 00:17:17.593808  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.598269  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:17:17.598325  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:17:17.637723  248718 cri.go:89] found id: "4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:17.637750  248718 cri.go:89] found id: ""
	I1031 00:17:17.637761  248718 logs.go:284] 1 containers: [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70]
	I1031 00:17:17.637826  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.642006  248718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:17:17.642055  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:17:17.686659  248718 cri.go:89] found id: ""
	I1031 00:17:17.686687  248718 logs.go:284] 0 containers: []
	W1031 00:17:17.686695  248718 logs.go:286] No container was found matching "kindnet"
	I1031 00:17:17.686701  248718 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:17:17.686766  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:17:17.732114  248718 cri.go:89] found id: "86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:17.732147  248718 cri.go:89] found id: "622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:17.732154  248718 cri.go:89] found id: ""
	I1031 00:17:17.732163  248718 logs.go:284] 2 containers: [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c]
	I1031 00:17:17.732232  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.737308  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.741981  248718 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:17:17.742013  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:17:18.181024  248718 logs.go:123] Gathering logs for dmesg ...
	I1031 00:17:18.181062  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:17:18.196483  248718 logs.go:123] Gathering logs for coredns [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26] ...
	I1031 00:17:18.196519  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:18.235422  248718 logs.go:123] Gathering logs for kube-controller-manager [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70] ...
	I1031 00:17:18.235458  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:18.291366  248718 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:17:18.291402  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:17:18.412906  248718 logs.go:123] Gathering logs for etcd [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6] ...
	I1031 00:17:18.412960  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:18.469631  248718 logs.go:123] Gathering logs for kubelet ...
	I1031 00:17:18.469669  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1031 00:17:18.523997  248718 logs.go:123] Gathering logs for kube-scheduler [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80] ...
	I1031 00:17:18.524034  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:18.566490  248718 logs.go:123] Gathering logs for container status ...
	I1031 00:17:18.566520  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:17:18.626106  248718 logs.go:123] Gathering logs for storage-provisioner [622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c] ...
	I1031 00:17:18.626138  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:18.666341  248718 logs.go:123] Gathering logs for kube-apiserver [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033] ...
	I1031 00:17:18.666382  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:18.729380  248718 logs.go:123] Gathering logs for kube-proxy [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3] ...
	I1031 00:17:18.729430  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:18.788148  248718 logs.go:123] Gathering logs for storage-provisioner [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3] ...
	I1031 00:17:18.788182  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:21.330782  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:17:21.338085  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 200:
	ok
	I1031 00:17:21.339623  248718 api_server.go:141] control plane version: v1.28.3
	I1031 00:17:21.339671  248718 api_server.go:131] duration metric: took 3.961531332s to wait for apiserver health ...
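The healthz probe above is an HTTPS GET against the apiserver at https://192.168.50.2:8443/healthz; a 200 response with body "ok" is taken as healthy. A stripped-down sketch of such a probe (the real check authenticates with the cluster's CA and client certificates; verification is skipped here only to keep the example short):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustrative only: skip certificate verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}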
	I1031 00:17:21.339684  248718 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:17:21.339718  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:17:21.339786  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:17:21.380659  248718 cri.go:89] found id: "bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:21.380687  248718 cri.go:89] found id: ""
	I1031 00:17:21.380696  248718 logs.go:284] 1 containers: [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033]
	I1031 00:17:21.380760  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.385559  248718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:17:21.385626  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:17:21.431810  248718 cri.go:89] found id: "35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:21.431841  248718 cri.go:89] found id: ""
	I1031 00:17:21.431851  248718 logs.go:284] 1 containers: [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6]
	I1031 00:17:21.431914  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.436489  248718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:17:21.436562  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:17:21.489003  248718 cri.go:89] found id: "8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:21.489036  248718 cri.go:89] found id: ""
	I1031 00:17:21.489047  248718 logs.go:284] 1 containers: [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26]
	I1031 00:17:21.489109  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.493691  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:17:21.493765  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:17:21.533480  248718 cri.go:89] found id: "ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:21.533507  248718 cri.go:89] found id: ""
	I1031 00:17:21.533518  248718 logs.go:284] 1 containers: [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80]
	I1031 00:17:21.533584  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.538269  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:17:21.538358  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:17:21.589588  248718 cri.go:89] found id: "f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:21.589621  248718 cri.go:89] found id: ""
	I1031 00:17:21.589632  248718 logs.go:284] 1 containers: [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3]
	I1031 00:17:21.589705  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.595927  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:17:21.596020  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:17:21.644705  248718 cri.go:89] found id: "4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:21.644730  248718 cri.go:89] found id: ""
	I1031 00:17:21.644738  248718 logs.go:284] 1 containers: [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70]
	I1031 00:17:21.644797  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.649696  248718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:17:21.649762  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:17:21.696655  248718 cri.go:89] found id: ""
	I1031 00:17:21.696692  248718 logs.go:284] 0 containers: []
	W1031 00:17:21.696703  248718 logs.go:286] No container was found matching "kindnet"
	I1031 00:17:21.696711  248718 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:17:21.696788  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:17:21.743499  248718 cri.go:89] found id: "86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:21.743523  248718 cri.go:89] found id: "622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:21.743528  248718 cri.go:89] found id: ""
	I1031 00:17:21.743535  248718 logs.go:284] 2 containers: [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c]
	I1031 00:17:21.743586  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.748625  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.753187  248718 logs.go:123] Gathering logs for dmesg ...
	I1031 00:17:21.753223  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:17:21.768074  248718 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:17:21.768115  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:17:21.913742  248718 logs.go:123] Gathering logs for coredns [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26] ...
	I1031 00:17:21.913782  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:21.966345  248718 logs.go:123] Gathering logs for storage-provisioner [622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c] ...
	I1031 00:17:21.966394  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:22.004823  248718 logs.go:123] Gathering logs for container status ...
	I1031 00:17:22.004857  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:17:22.059117  248718 logs.go:123] Gathering logs for etcd [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6] ...
	I1031 00:17:22.059147  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:22.117615  248718 logs.go:123] Gathering logs for kube-scheduler [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80] ...
	I1031 00:17:22.117655  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:22.160231  248718 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:17:22.160275  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:17:20.445730  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:22.447412  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:21.390006  248387 out.go:204]   - Booting up control plane ...
	I1031 00:17:21.390170  248387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 00:17:21.390275  248387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 00:17:21.391130  248387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 00:17:21.408062  248387 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 00:17:21.409190  248387 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 00:17:21.409256  248387 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1031 00:17:21.565150  248387 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 00:17:22.536881  248718 logs.go:123] Gathering logs for kube-apiserver [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033] ...
	I1031 00:17:22.536920  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:22.591993  248718 logs.go:123] Gathering logs for kube-proxy [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3] ...
	I1031 00:17:22.592030  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:22.644262  248718 logs.go:123] Gathering logs for storage-provisioner [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3] ...
	I1031 00:17:22.644302  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:22.688848  248718 logs.go:123] Gathering logs for kubelet ...
	I1031 00:17:22.688880  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1031 00:17:22.740390  248718 logs.go:123] Gathering logs for kube-controller-manager [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70] ...
	I1031 00:17:22.740440  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:25.317640  248718 system_pods.go:59] 8 kube-system pods found
	I1031 00:17:25.317675  248718 system_pods.go:61] "coredns-5dd5756b68-dqrs4" [f6d80a09-c397-4c78-a038-f07cad11de9c] Running
	I1031 00:17:25.317682  248718 system_pods.go:61] "etcd-embed-certs-078843" [2dd3d20f-1309-4ec9-ab75-6b00cadc5827] Running
	I1031 00:17:25.317690  248718 system_pods.go:61] "kube-apiserver-embed-certs-078843" [6a41123e-11a9-4aff-8f78-802b8f59a1bb] Running
	I1031 00:17:25.317696  248718 system_pods.go:61] "kube-controller-manager-embed-certs-078843" [9ccb551e-3e3f-4cdc-991e-65b41febf105] Running
	I1031 00:17:25.317702  248718 system_pods.go:61] "kube-proxy-287dq" [c9c3a3a9-ff79-4cd8-ab26-a4ca2bec1fd9] Running
	I1031 00:17:25.317709  248718 system_pods.go:61] "kube-scheduler-embed-certs-078843" [13a0f095-b945-437c-a7ef-929739bfcb01] Running
	I1031 00:17:25.317718  248718 system_pods.go:61] "metrics-server-57f55c9bc5-pm6qx" [5ed61015-eb88-4381-adc3-8d1f4021c6aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:17:25.317728  248718 system_pods.go:61] "storage-provisioner" [6bce0572-aad8-4a9f-978f-9bd0ff62904a] Running
	I1031 00:17:25.317737  248718 system_pods.go:74] duration metric: took 3.978040466s to wait for pod list to return data ...
	I1031 00:17:25.317752  248718 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:17:25.320120  248718 default_sa.go:45] found service account: "default"
	I1031 00:17:25.320147  248718 default_sa.go:55] duration metric: took 2.387709ms for default service account to be created ...
	I1031 00:17:25.320156  248718 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:17:25.325979  248718 system_pods.go:86] 8 kube-system pods found
	I1031 00:17:25.326004  248718 system_pods.go:89] "coredns-5dd5756b68-dqrs4" [f6d80a09-c397-4c78-a038-f07cad11de9c] Running
	I1031 00:17:25.326009  248718 system_pods.go:89] "etcd-embed-certs-078843" [2dd3d20f-1309-4ec9-ab75-6b00cadc5827] Running
	I1031 00:17:25.326014  248718 system_pods.go:89] "kube-apiserver-embed-certs-078843" [6a41123e-11a9-4aff-8f78-802b8f59a1bb] Running
	I1031 00:17:25.326018  248718 system_pods.go:89] "kube-controller-manager-embed-certs-078843" [9ccb551e-3e3f-4cdc-991e-65b41febf105] Running
	I1031 00:17:25.326022  248718 system_pods.go:89] "kube-proxy-287dq" [c9c3a3a9-ff79-4cd8-ab26-a4ca2bec1fd9] Running
	I1031 00:17:25.326025  248718 system_pods.go:89] "kube-scheduler-embed-certs-078843" [13a0f095-b945-437c-a7ef-929739bfcb01] Running
	I1031 00:17:25.326055  248718 system_pods.go:89] "metrics-server-57f55c9bc5-pm6qx" [5ed61015-eb88-4381-adc3-8d1f4021c6aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:17:25.326079  248718 system_pods.go:89] "storage-provisioner" [6bce0572-aad8-4a9f-978f-9bd0ff62904a] Running
	I1031 00:17:25.326088  248718 system_pods.go:126] duration metric: took 5.92719ms to wait for k8s-apps to be running ...
	I1031 00:17:25.326097  248718 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:17:25.326148  248718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:17:25.342753  248718 system_svc.go:56] duration metric: took 16.646026ms WaitForService to wait for kubelet.
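The WaitForService step above reduces to asking systemd whether the kubelet unit is active; with --quiet, systemctl reports the answer only through its exit status. A minimal sketch of that check:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 means the unit is active; any other status means it is not.
	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}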
	I1031 00:17:25.342775  248718 kubeadm.go:581] duration metric: took 4m20.257105243s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:17:25.342793  248718 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:17:25.348257  248718 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:17:25.348315  248718 node_conditions.go:123] node cpu capacity is 2
	I1031 00:17:25.348379  248718 node_conditions.go:105] duration metric: took 5.579398ms to run NodePressure ...
	I1031 00:17:25.348413  248718 start.go:228] waiting for startup goroutines ...
	I1031 00:17:25.348426  248718 start.go:233] waiting for cluster config update ...
	I1031 00:17:25.348440  248718 start.go:242] writing updated cluster config ...
	I1031 00:17:25.349022  248718 ssh_runner.go:195] Run: rm -f paused
	I1031 00:17:25.415112  248718 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1031 00:17:25.418179  248718 out.go:177] * Done! kubectl is now configured to use "embed-certs-078843" cluster and "default" namespace by default
	I1031 00:17:21.166338  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:23.666609  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:24.447530  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:26.947352  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:29.570822  248387 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004974 seconds
	I1031 00:17:29.570964  248387 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 00:17:29.587033  248387 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 00:17:30.119470  248387 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 00:17:30.119696  248387 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-640155 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 00:17:30.635312  248387 kubeadm.go:322] [bootstrap-token] Using token: cwaa4b.bqwxrocs0j7ngn44
	I1031 00:17:26.166271  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:28.664576  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:30.664963  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:30.636717  248387 out.go:204]   - Configuring RBAC rules ...
	I1031 00:17:30.636873  248387 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 00:17:30.642895  248387 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 00:17:30.651729  248387 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 00:17:30.655472  248387 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 00:17:30.659228  248387 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 00:17:30.668748  248387 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 00:17:30.690255  248387 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 00:17:30.950445  248387 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 00:17:31.051453  248387 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 00:17:31.051475  248387 kubeadm.go:322] 
	I1031 00:17:31.051536  248387 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 00:17:31.051583  248387 kubeadm.go:322] 
	I1031 00:17:31.051709  248387 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 00:17:31.051728  248387 kubeadm.go:322] 
	I1031 00:17:31.051767  248387 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 00:17:31.051843  248387 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 00:17:31.051930  248387 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 00:17:31.051943  248387 kubeadm.go:322] 
	I1031 00:17:31.052013  248387 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1031 00:17:31.052024  248387 kubeadm.go:322] 
	I1031 00:17:31.052104  248387 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 00:17:31.052130  248387 kubeadm.go:322] 
	I1031 00:17:31.052191  248387 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 00:17:31.052280  248387 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 00:17:31.052375  248387 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 00:17:31.052383  248387 kubeadm.go:322] 
	I1031 00:17:31.052485  248387 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 00:17:31.052578  248387 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 00:17:31.052612  248387 kubeadm.go:322] 
	I1031 00:17:31.052744  248387 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token cwaa4b.bqwxrocs0j7ngn44 \
	I1031 00:17:31.052900  248387 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 \
	I1031 00:17:31.052957  248387 kubeadm.go:322] 	--control-plane 
	I1031 00:17:31.052969  248387 kubeadm.go:322] 
	I1031 00:17:31.053092  248387 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 00:17:31.053107  248387 kubeadm.go:322] 
	I1031 00:17:31.053217  248387 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token cwaa4b.bqwxrocs0j7ngn44 \
	I1031 00:17:31.053359  248387 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1031 00:17:31.053517  248387 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 00:17:31.053540  248387 cni.go:84] Creating CNI manager for ""
	I1031 00:17:31.053552  248387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:17:31.055477  248387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:17:29.447694  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:31.449117  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:33.947759  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:31.056845  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:17:31.095104  248387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
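The two lines above create /etc/cni/net.d and copy a 457-byte bridge CNI conflist into it; the file's exact contents are not shown in the log. For orientation only, here is an illustrative minimal bridge-plus-portmap conflist of the kind the bridge plugin accepts (the pod subnet is an assumption), written the way a small helper might:

package main

import (
	"log"
	"os"
)

// Illustrative conflist, not the exact file minikube copied; the subnet below is assumed.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}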
	I1031 00:17:31.131198  248387 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:17:31.131322  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:31.131337  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4 minikube.k8s.io/name=no-preload-640155 minikube.k8s.io/updated_at=2023_10_31T00_17_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:31.581951  248387 ops.go:34] apiserver oom_adj: -16
	I1031 00:17:31.582010  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:31.741330  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:32.350182  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:32.850643  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:33.350205  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:33.850216  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:34.349583  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:34.850194  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:32.666281  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:35.168579  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:36.449644  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:38.946898  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:35.350661  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:35.850301  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:36.349673  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:36.849749  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:37.349755  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:37.850628  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:38.350204  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:38.849697  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:39.350194  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:39.850027  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:37.667083  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:40.166305  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:40.349747  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:40.850194  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:41.350476  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:41.850214  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:42.350555  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:42.850295  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:43.350645  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:43.679529  248387 kubeadm.go:1081] duration metric: took 12.548274555s to wait for elevateKubeSystemPrivileges.
	I1031 00:17:43.679561  248387 kubeadm.go:406] StartCluster complete in 5m6.156207823s
	I1031 00:17:43.679585  248387 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:17:43.679674  248387 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:17:43.682045  248387 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:17:43.684483  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:17:43.684785  248387 config.go:182] Loaded profile config "no-preload-640155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:17:43.684856  248387 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:17:43.684927  248387 addons.go:69] Setting storage-provisioner=true in profile "no-preload-640155"
	I1031 00:17:43.685036  248387 addons.go:231] Setting addon storage-provisioner=true in "no-preload-640155"
	W1031 00:17:43.685063  248387 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:17:43.685159  248387 host.go:66] Checking if "no-preload-640155" exists ...
	I1031 00:17:43.685323  248387 addons.go:69] Setting metrics-server=true in profile "no-preload-640155"
	I1031 00:17:43.685339  248387 addons.go:231] Setting addon metrics-server=true in "no-preload-640155"
	W1031 00:17:43.685356  248387 addons.go:240] addon metrics-server should already be in state true
	I1031 00:17:43.685395  248387 host.go:66] Checking if "no-preload-640155" exists ...
	I1031 00:17:43.685653  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.685706  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.685893  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.685978  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.686168  248387 addons.go:69] Setting default-storageclass=true in profile "no-preload-640155"
	I1031 00:17:43.686191  248387 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-640155"
	I1031 00:17:43.686545  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.686651  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.705002  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1031 00:17:43.705181  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39807
	I1031 00:17:43.705556  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.706410  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.706515  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.706543  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.706893  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I1031 00:17:43.706968  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.707139  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:17:43.707141  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.707157  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.707503  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.708166  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.708183  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.708236  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.708752  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.708783  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.709044  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.709715  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.709762  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.711511  248387 addons.go:231] Setting addon default-storageclass=true in "no-preload-640155"
	W1031 00:17:43.711525  248387 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:17:43.711553  248387 host.go:66] Checking if "no-preload-640155" exists ...
	I1031 00:17:43.711887  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.711927  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.730687  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I1031 00:17:43.731513  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.732184  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.732205  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.732737  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.733201  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:17:43.734567  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33799
	I1031 00:17:43.734708  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38837
	I1031 00:17:43.735166  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.735665  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.735687  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.736245  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.736325  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.736490  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:17:43.736559  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:17:43.737461  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.739478  248387 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:17:43.737480  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.738913  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:17:43.741138  248387 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:17:43.741154  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:17:43.741176  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:17:43.742564  248387 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:17:43.741663  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.744300  248387 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:17:43.744312  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:17:43.744326  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:17:43.744413  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.745065  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.745106  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.753076  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.753082  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:17:43.753110  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:17:43.753196  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:17:43.753200  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:17:43.753235  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.753249  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.753282  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:17:43.753376  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:17:43.753469  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:17:43.753527  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:17:43.753624  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:17:43.753739  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:17:43.770481  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44553
	I1031 00:17:43.770925  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.773191  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.773223  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.773636  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.773840  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:17:43.775633  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:17:43.775954  248387 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:17:43.775969  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:17:43.775988  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:17:43.778552  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.778797  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:17:43.778823  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.779021  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:17:43.779204  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:17:43.779386  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:17:43.779683  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:17:43.936171  248387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:17:43.958064  248387 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:17:43.958098  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:17:43.967116  248387 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-640155" context rescaled to 1 replicas
	I1031 00:17:43.967170  248387 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.168 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:17:43.969408  248387 out.go:177] * Verifying Kubernetes components...
	I1031 00:17:40.138062  249055 pod_ready.go:81] duration metric: took 4m0.000119587s waiting for pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace to be "Ready" ...
	E1031 00:17:40.138098  249055 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:17:40.138122  249055 pod_ready.go:38] duration metric: took 4m11.730710605s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:17:40.138164  249055 kubeadm.go:640] restartCluster took 4m31.295508075s
	W1031 00:17:40.138262  249055 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1031 00:17:40.138297  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1031 00:17:43.970897  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:17:43.997796  248387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:17:44.038710  248387 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:17:44.038738  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:17:44.075299  248387 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:17:44.075333  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:17:44.084795  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 00:17:44.172770  248387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:17:42.670020  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:45.165914  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:46.365906  248387 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.39492875s)
	I1031 00:17:46.365968  248387 node_ready.go:35] waiting up to 6m0s for node "no-preload-640155" to be "Ready" ...
	I1031 00:17:46.365998  248387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.368158747s)
	I1031 00:17:46.366066  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.366074  248387 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.281185782s)
	I1031 00:17:46.366103  248387 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1031 00:17:46.366086  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.366354  248387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.430149836s)
	I1031 00:17:46.366390  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.366402  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.366600  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.366612  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.366622  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.366631  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.366682  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.366732  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.366742  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.366751  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.366761  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.368921  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.368922  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.368958  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.369248  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.369293  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.369307  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.375988  248387 node_ready.go:49] node "no-preload-640155" has status "Ready":"True"
	I1031 00:17:46.376021  248387 node_ready.go:38] duration metric: took 10.036603ms waiting for node "no-preload-640155" to be "Ready" ...
	I1031 00:17:46.376036  248387 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:17:46.401563  248387 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gp6pj" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:46.425939  248387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.253121961s)
	I1031 00:17:46.426019  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.426035  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.427461  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.427471  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.427488  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.427498  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.427508  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.427894  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.427943  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.427954  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.427971  248387 addons.go:467] Verifying addon metrics-server=true in "no-preload-640155"
	I1031 00:17:46.436605  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.436630  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.436927  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.436959  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.436987  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.438529  248387 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1031 00:17:46.439869  248387 addons.go:502] enable addons completed in 2.755015847s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1031 00:17:48.527903  248387 pod_ready.go:92] pod "coredns-5dd5756b68-gp6pj" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.527939  248387 pod_ready.go:81] duration metric: took 2.126335033s waiting for pod "coredns-5dd5756b68-gp6pj" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.527954  248387 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.544043  248387 pod_ready.go:92] pod "etcd-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.544070  248387 pod_ready.go:81] duration metric: took 16.106665ms waiting for pod "etcd-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.544085  248387 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.552043  248387 pod_ready.go:92] pod "kube-apiserver-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.552075  248387 pod_ready.go:81] duration metric: took 7.981099ms waiting for pod "kube-apiserver-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.552092  248387 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.563073  248387 pod_ready.go:92] pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.563112  248387 pod_ready.go:81] duration metric: took 11.009619ms waiting for pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.563128  248387 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pkjsl" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.771051  248387 pod_ready.go:92] pod "kube-proxy-pkjsl" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.771080  248387 pod_ready.go:81] duration metric: took 207.944354ms waiting for pod "kube-proxy-pkjsl" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.771090  248387 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:49.170323  248387 pod_ready.go:92] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:49.170354  248387 pod_ready.go:81] duration metric: took 399.25516ms waiting for pod "kube-scheduler-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:49.170369  248387 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:47.166417  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:49.665614  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:51.479213  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:53.979583  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:54.802281  249055 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.663950968s)
	I1031 00:17:54.802401  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:17:54.818228  249055 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:17:54.829802  249055 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:17:54.841203  249055 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:17:54.841254  249055 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1031 00:17:54.900359  249055 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1031 00:17:54.900453  249055 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 00:17:55.068403  249055 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 00:17:55.068563  249055 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 00:17:55.068676  249055 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 00:17:55.316737  249055 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 00:17:51.665839  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:53.666626  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:55.319016  249055 out.go:204]   - Generating certificates and keys ...
	I1031 00:17:55.319172  249055 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 00:17:55.319275  249055 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 00:17:55.319395  249055 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1031 00:17:55.319481  249055 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1031 00:17:55.319603  249055 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1031 00:17:55.320419  249055 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1031 00:17:55.320814  249055 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1031 00:17:55.321700  249055 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1031 00:17:55.322211  249055 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1031 00:17:55.322708  249055 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1031 00:17:55.323252  249055 kubeadm.go:322] [certs] Using the existing "sa" key
	I1031 00:17:55.323344  249055 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 00:17:55.388450  249055 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 00:17:55.461692  249055 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 00:17:55.807861  249055 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 00:17:55.963028  249055 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 00:17:55.963510  249055 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 00:17:55.966001  249055 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 00:17:55.967951  249055 out.go:204]   - Booting up control plane ...
	I1031 00:17:55.968125  249055 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 00:17:55.968238  249055 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 00:17:55.968343  249055 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 00:17:55.989357  249055 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 00:17:55.990439  249055 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 00:17:55.990548  249055 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1031 00:17:56.126548  249055 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 00:17:56.479126  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:58.479232  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:56.166722  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:58.667319  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:00.980893  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:03.481571  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:04.629984  249055 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502923 seconds
	I1031 00:18:04.630137  249055 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 00:18:04.643529  249055 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 00:18:05.178336  249055 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 00:18:05.178549  249055 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-892233 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 00:18:05.695447  249055 kubeadm.go:322] [bootstrap-token] Using token: g00nr2.87o2mnv2u0jwf81d
	I1031 00:18:01.165232  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:03.166303  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:05.664899  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:05.696918  249055 out.go:204]   - Configuring RBAC rules ...
	I1031 00:18:05.697075  249055 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 00:18:05.706237  249055 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 00:18:05.720767  249055 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 00:18:05.731239  249055 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 00:18:05.736130  249055 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 00:18:05.740949  249055 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 00:18:05.759998  249055 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 00:18:06.051798  249055 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 00:18:06.118986  249055 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 00:18:06.119014  249055 kubeadm.go:322] 
	I1031 00:18:06.119078  249055 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 00:18:06.119084  249055 kubeadm.go:322] 
	I1031 00:18:06.119179  249055 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 00:18:06.119190  249055 kubeadm.go:322] 
	I1031 00:18:06.119225  249055 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 00:18:06.119282  249055 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 00:18:06.119326  249055 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 00:18:06.119332  249055 kubeadm.go:322] 
	I1031 00:18:06.119376  249055 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1031 00:18:06.119382  249055 kubeadm.go:322] 
	I1031 00:18:06.119424  249055 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 00:18:06.119435  249055 kubeadm.go:322] 
	I1031 00:18:06.119484  249055 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 00:18:06.119551  249055 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 00:18:06.119677  249055 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 00:18:06.119703  249055 kubeadm.go:322] 
	I1031 00:18:06.119830  249055 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 00:18:06.119938  249055 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 00:18:06.119957  249055 kubeadm.go:322] 
	I1031 00:18:06.120024  249055 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token g00nr2.87o2mnv2u0jwf81d \
	I1031 00:18:06.120179  249055 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 \
	I1031 00:18:06.120208  249055 kubeadm.go:322] 	--control-plane 
	I1031 00:18:06.120219  249055 kubeadm.go:322] 
	I1031 00:18:06.120330  249055 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 00:18:06.120368  249055 kubeadm.go:322] 
	I1031 00:18:06.120468  249055 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token g00nr2.87o2mnv2u0jwf81d \
	I1031 00:18:06.120559  249055 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1031 00:18:06.121091  249055 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 00:18:06.121119  249055 cni.go:84] Creating CNI manager for ""
	I1031 00:18:06.121127  249055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:18:06.123073  249055 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:18:06.124566  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:18:06.140064  249055 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:18:06.171195  249055 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:18:06.171343  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:06.171359  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4 minikube.k8s.io/name=default-k8s-diff-port-892233 minikube.k8s.io/updated_at=2023_10_31T00_18_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:06.256957  249055 ops.go:34] apiserver oom_adj: -16
	I1031 00:18:06.637700  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:06.769942  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:07.383359  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:07.883621  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:08.384017  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:08.883751  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:05.979125  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:07.979280  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:09.981296  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:07.666495  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:10.165765  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:09.383896  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:09.883523  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:10.384077  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:10.883546  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:11.383417  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:11.883493  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:12.384043  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:12.884000  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:13.383479  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:13.884100  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:12.479614  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:14.978890  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:12.666054  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:15.163419  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:14.384001  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:14.884297  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:15.383607  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:15.883617  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:16.383591  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:16.884141  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:17.384112  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:17.884196  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:18.384156  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:18.883687  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:19.114222  249055 kubeadm.go:1081] duration metric: took 12.942949327s to wait for elevateKubeSystemPrivileges.
	I1031 00:18:19.114261  249055 kubeadm.go:406] StartCluster complete in 5m10.335188993s
	I1031 00:18:19.114295  249055 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:18:19.114401  249055 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:18:19.116632  249055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:18:19.116971  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:18:19.117107  249055 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:18:19.117188  249055 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-892233"
	I1031 00:18:19.117202  249055 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-892233"
	I1031 00:18:19.117221  249055 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-892233"
	I1031 00:18:19.117231  249055 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-892233"
	I1031 00:18:19.117239  249055 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-892233"
	W1031 00:18:19.117243  249055 addons.go:240] addon metrics-server should already be in state true
	I1031 00:18:19.117265  249055 config.go:182] Loaded profile config "default-k8s-diff-port-892233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:18:19.117305  249055 host.go:66] Checking if "default-k8s-diff-port-892233" exists ...
	I1031 00:18:19.117213  249055 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-892233"
	W1031 00:18:19.117326  249055 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:18:19.117372  249055 host.go:66] Checking if "default-k8s-diff-port-892233" exists ...
	I1031 00:18:19.117711  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.117740  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.117746  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.117761  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.117711  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.117830  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.134384  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38003
	I1031 00:18:19.134426  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I1031 00:18:19.134810  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.134915  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.135437  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.135461  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.135648  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.135675  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.136018  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.136074  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.136578  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.136625  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.137167  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.137198  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.144184  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35153
	I1031 00:18:19.144763  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.145263  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.145293  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.145648  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.145852  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:18:19.152132  249055 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-892233"
	W1031 00:18:19.152194  249055 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:18:19.152240  249055 host.go:66] Checking if "default-k8s-diff-port-892233" exists ...
	I1031 00:18:19.152775  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.152867  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.154334  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45793
	I1031 00:18:19.155862  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38905
	I1031 00:18:19.157267  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.158677  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.158735  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.158863  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.164983  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.165014  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.165044  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.166267  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.166284  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:18:19.169122  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:18:19.169199  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:18:19.174627  249055 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:18:19.170934  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:18:19.176219  249055 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:18:19.177591  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:18:19.177619  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:18:19.179052  249055 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:18:19.176693  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45785
	I1031 00:18:19.178184  249055 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-892233" context rescaled to 1 replicas
	I1031 00:18:19.179171  249055 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.2 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:18:19.181526  249055 out.go:177] * Verifying Kubernetes components...
	I1031 00:18:19.182930  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:18:16.980163  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:18.981179  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:17.165555  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:19.174245  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:19.181603  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.184667  249055 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:18:19.184676  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:18:19.184683  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:18:19.184698  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.179546  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.184702  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:18:19.182398  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:18:19.184914  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:18:19.185097  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:18:19.185743  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.185761  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.185827  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:18:19.186516  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.187946  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.187988  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:18:19.188014  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.188359  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.188374  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.188549  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:18:19.188757  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:18:19.189003  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:18:19.189160  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:18:19.203564  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I1031 00:18:19.203935  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.204374  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.204399  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.204741  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.204994  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:18:19.207012  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:18:19.207266  249055 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:18:19.207283  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:18:19.207302  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:18:19.209950  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.210314  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:18:19.210332  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.210507  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:18:19.210701  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:18:19.210830  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:18:19.210962  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:18:19.423829  249055 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:18:19.423852  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:18:19.440581  249055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:18:19.466961  249055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:18:19.511517  249055 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:18:19.511543  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:18:19.591560  249055 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:18:19.591588  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:18:19.628414  249055 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-892233" to be "Ready" ...
	I1031 00:18:19.628560  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 00:18:19.648329  249055 node_ready.go:49] node "default-k8s-diff-port-892233" has status "Ready":"True"
	I1031 00:18:19.648353  249055 node_ready.go:38] duration metric: took 19.904402ms waiting for node "default-k8s-diff-port-892233" to be "Ready" ...
	I1031 00:18:19.648364  249055 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:18:19.658333  249055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:18:19.692147  249055 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-j9g85" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:21.904902  249055 pod_ready.go:102] pod "coredns-5dd5756b68-j9g85" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:22.104924  249055 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.637923019s)
	I1031 00:18:22.104999  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.104997  249055 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.664373813s)
	I1031 00:18:22.105008  249055 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.476413511s)
	I1031 00:18:22.105035  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.105013  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.105052  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.105035  249055 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1031 00:18:22.105350  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.105366  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.105376  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.105388  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.105479  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | Closing plugin on server side
	I1031 00:18:22.105541  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.105554  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.105573  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.105594  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.105821  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.105852  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.105860  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.105870  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.146205  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.146231  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.146598  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.146631  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.219948  249055 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.561551335s)
	I1031 00:18:22.220017  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.220033  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.220412  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.220441  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.220459  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.220474  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.220820  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.220840  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.220853  249055 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-892233"
	I1031 00:18:22.222793  249055 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1031 00:18:22.224194  249055 addons.go:502] enable addons completed in 3.107083845s: enabled=[storage-provisioner default-storageclass metrics-server]
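
The long sed pipeline a few lines above (`kubectl ... get configmap coredns -o yaml | sed ... | kubectl ... replace -f -`) is how minikube injects the `host.minikube.internal` record reported at `start.go:926`: it pulls the `coredns` ConfigMap, splices a `hosts` block pointing at the host-side gateway IP (192.168.39.1 here) in front of the `forward . /etc/resolv.conf` plugin (and adds a `log` directive after `errors`), then replaces the ConfigMap. The sketch below reproduces only the hosts-block part of that rewrite in Go; the function name and the sample Corefile are illustrative, not minikube's actual code.

```go
package main

import (
	"fmt"
	"strings"
)

// injectMinikubeHost splices a CoreDNS "hosts" block in front of the
// "forward . /etc/resolv.conf" line, mirroring what the sed pipeline in the
// log does to the coredns ConfigMap. Illustrative only.
func injectMinikubeHost(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)

	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimLeft(line, " "), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock) // insert just before the forward plugin
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectMinikubeHost(corefile, "192.168.39.1"))
}
```
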
	I1031 00:18:22.880805  249055 pod_ready.go:92] pod "coredns-5dd5756b68-j9g85" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:22.880840  249055 pod_ready.go:81] duration metric: took 3.18866819s waiting for pod "coredns-5dd5756b68-j9g85" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:22.880853  249055 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pjtg4" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.912036  249055 pod_ready.go:92] pod "coredns-5dd5756b68-pjtg4" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:23.912066  249055 pod_ready.go:81] duration metric: took 1.031204489s waiting for pod "coredns-5dd5756b68-pjtg4" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.912079  249055 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.918589  249055 pod_ready.go:92] pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:23.918609  249055 pod_ready.go:81] duration metric: took 6.523247ms waiting for pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.918619  249055 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.925040  249055 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:23.925059  249055 pod_ready.go:81] duration metric: took 6.434141ms waiting for pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.925067  249055 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.073002  249055 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:24.073029  249055 pod_ready.go:81] duration metric: took 147.953037ms waiting for pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.073044  249055 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-77gzz" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:21.478451  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:23.479849  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:24.473158  249055 pod_ready.go:92] pod "kube-proxy-77gzz" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:24.473184  249055 pod_ready.go:81] duration metric: took 400.13282ms waiting for pod "kube-proxy-77gzz" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.473194  249055 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.873506  249055 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:24.873528  249055 pod_ready.go:81] duration metric: took 400.328112ms waiting for pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.873538  249055 pod_ready.go:38] duration metric: took 5.225163782s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:18:24.873558  249055 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:18:24.873617  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:18:24.890474  249055 api_server.go:72] duration metric: took 5.711236569s to wait for apiserver process to appear ...
	I1031 00:18:24.890508  249055 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:18:24.890533  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:18:24.896826  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 200:
	ok
	I1031 00:18:24.898203  249055 api_server.go:141] control plane version: v1.28.3
	I1031 00:18:24.898226  249055 api_server.go:131] duration metric: took 7.708512ms to wait for apiserver health ...
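
The `api_server.go` lines just above are the health gate before the flow moves on: the apiserver healthz endpoint at https://192.168.39.2:8444/healthz is polled until it answers HTTP 200 with body `ok`, after which the control-plane version (v1.28.3) is read. Below is a minimal stand-alone poller in the same spirit; the 5s per-request timeout, 500ms retry interval, and the InsecureSkipVerify shortcut are assumptions of this sketch, not minikube's exact settings.

```go
package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 with body "ok", or the deadline expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip certificate verification for the quick probe instead of loading
		// the cluster CA (a shortcut taken only for this sketch).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("apiserver did not become healthy in time")
}

func main() {
	if err := waitForHealthz("https://192.168.39.2:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
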
	I1031 00:18:24.898234  249055 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:18:25.076806  249055 system_pods.go:59] 9 kube-system pods found
	I1031 00:18:25.076835  249055 system_pods.go:61] "coredns-5dd5756b68-j9g85" [e4534565-4d9b-44d6-bcf1-5b57645645bc] Running
	I1031 00:18:25.076840  249055 system_pods.go:61] "coredns-5dd5756b68-pjtg4" [6c771175-3c51-4988-8b90-58ff0e33a5f8] Running
	I1031 00:18:25.076845  249055 system_pods.go:61] "etcd-default-k8s-diff-port-892233" [47dea79e-371e-45ff-960e-41e96a4427e5] Running
	I1031 00:18:25.076850  249055 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-892233" [87be303c-6850-4ab1-98a3-c8a08f601965] Running
	I1031 00:18:25.076854  249055 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-892233" [7533baa8-87b4-4fa9-8385-9945e0fffaf4] Running
	I1031 00:18:25.076857  249055 system_pods.go:61] "kube-proxy-77gzz" [e7cb1c4a-2ad0-47b9-bca4-2e03d4e1cf39] Running
	I1031 00:18:25.076861  249055 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-892233" [b7630ce4-db97-45a6-a9a3-f7b8f3128182] Running
	I1031 00:18:25.076868  249055 system_pods.go:61] "metrics-server-57f55c9bc5-8pc87" [c91683ff-11bf-4530-90c3-91f4b28e2dab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:18:25.076874  249055 system_pods.go:61] "storage-provisioner" [995d33e4-0d28-4efb-8d30-d5a05d04b61c] Running
	I1031 00:18:25.076882  249055 system_pods.go:74] duration metric: took 178.64211ms to wait for pod list to return data ...
	I1031 00:18:25.076889  249055 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:18:25.272531  249055 default_sa.go:45] found service account: "default"
	I1031 00:18:25.272557  249055 default_sa.go:55] duration metric: took 195.662215ms for default service account to be created ...
	I1031 00:18:25.272567  249055 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:18:25.477225  249055 system_pods.go:86] 9 kube-system pods found
	I1031 00:18:25.477258  249055 system_pods.go:89] "coredns-5dd5756b68-j9g85" [e4534565-4d9b-44d6-bcf1-5b57645645bc] Running
	I1031 00:18:25.477266  249055 system_pods.go:89] "coredns-5dd5756b68-pjtg4" [6c771175-3c51-4988-8b90-58ff0e33a5f8] Running
	I1031 00:18:25.477275  249055 system_pods.go:89] "etcd-default-k8s-diff-port-892233" [47dea79e-371e-45ff-960e-41e96a4427e5] Running
	I1031 00:18:25.477282  249055 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-892233" [87be303c-6850-4ab1-98a3-c8a08f601965] Running
	I1031 00:18:25.477292  249055 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-892233" [7533baa8-87b4-4fa9-8385-9945e0fffaf4] Running
	I1031 00:18:25.477298  249055 system_pods.go:89] "kube-proxy-77gzz" [e7cb1c4a-2ad0-47b9-bca4-2e03d4e1cf39] Running
	I1031 00:18:25.477309  249055 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-892233" [b7630ce4-db97-45a6-a9a3-f7b8f3128182] Running
	I1031 00:18:25.477323  249055 system_pods.go:89] "metrics-server-57f55c9bc5-8pc87" [c91683ff-11bf-4530-90c3-91f4b28e2dab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:18:25.477333  249055 system_pods.go:89] "storage-provisioner" [995d33e4-0d28-4efb-8d30-d5a05d04b61c] Running
	I1031 00:18:25.477343  249055 system_pods.go:126] duration metric: took 204.769317ms to wait for k8s-apps to be running ...
	I1031 00:18:25.477356  249055 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:18:25.477416  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:18:25.494054  249055 system_svc.go:56] duration metric: took 16.688482ms WaitForService to wait for kubelet.
	I1031 00:18:25.494079  249055 kubeadm.go:581] duration metric: took 6.314858374s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:18:25.494097  249055 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:18:25.673698  249055 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:18:25.673729  249055 node_conditions.go:123] node cpu capacity is 2
	I1031 00:18:25.673742  249055 node_conditions.go:105] duration metric: took 179.63938ms to run NodePressure ...
	I1031 00:18:25.673756  249055 start.go:228] waiting for startup goroutines ...
	I1031 00:18:25.673764  249055 start.go:233] waiting for cluster config update ...
	I1031 00:18:25.673778  249055 start.go:242] writing updated cluster config ...
	I1031 00:18:25.674107  249055 ssh_runner.go:195] Run: rm -f paused
	I1031 00:18:25.729477  249055 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1031 00:18:25.731433  249055 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-892233" cluster and "default" namespace by default
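
Most of the interleaved lines around this point are `pod_ready.go` polling from the parallel cluster tests: a pod is re-read every couple of seconds and logged with `"Ready":"False"` until its Ready condition turns True (the metrics-server pods here never do, which is what eventually trips the 4m0s wait further down). A compact client-go version of that readiness check is sketched below; the kubeconfig path and pod name are placeholders taken from the log, and the sketch omits the multi-minute deadline the real helper enforces.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll a named pod in kube-system until it reports Ready, like pod_ready.go does.
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(
			context.TODO(), "metrics-server-57f55c9bc5-d2xg4", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```
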
	I1031 00:18:21.666578  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:23.667065  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:25.980194  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:27.983361  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:26.166839  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:28.664820  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:30.665038  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:30.478938  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:32.980862  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:33.164907  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:35.165601  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:35.479491  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:37.978397  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:39.979837  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:37.167604  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:39.665586  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:41.982368  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:44.476905  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:41.359122  248084 pod_ready.go:81] duration metric: took 4m0.000818862s waiting for pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace to be "Ready" ...
	E1031 00:18:41.359173  248084 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:18:41.359193  248084 pod_ready.go:38] duration metric: took 4m1.201522433s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:18:41.359227  248084 kubeadm.go:640] restartCluster took 5m7.223824608s
	W1031 00:18:41.359305  248084 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1031 00:18:41.359335  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1031 00:18:46.480820  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:48.487440  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:46.413914  248084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.054544075s)
	I1031 00:18:46.414001  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:18:46.427362  248084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:18:46.436557  248084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:18:46.444929  248084 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:18:46.445010  248084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1031 00:18:46.659252  248084 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 00:18:50.978966  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:52.980133  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:59.061122  248084 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1031 00:18:59.061211  248084 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 00:18:59.061324  248084 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 00:18:59.061476  248084 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 00:18:59.061695  248084 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 00:18:59.061861  248084 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 00:18:59.061989  248084 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 00:18:59.062059  248084 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1031 00:18:59.062158  248084 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 00:18:59.063991  248084 out.go:204]   - Generating certificates and keys ...
	I1031 00:18:59.064091  248084 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 00:18:59.064178  248084 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 00:18:59.064261  248084 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1031 00:18:59.064320  248084 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1031 00:18:59.064400  248084 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1031 00:18:59.064478  248084 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1031 00:18:59.064590  248084 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1031 00:18:59.064687  248084 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1031 00:18:59.064777  248084 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1031 00:18:59.064884  248084 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1031 00:18:59.064967  248084 kubeadm.go:322] [certs] Using the existing "sa" key
	I1031 00:18:59.065056  248084 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 00:18:59.065123  248084 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 00:18:59.065199  248084 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 00:18:59.065284  248084 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 00:18:59.065375  248084 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 00:18:59.065483  248084 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 00:18:59.067362  248084 out.go:204]   - Booting up control plane ...
	I1031 00:18:59.067477  248084 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 00:18:59.067584  248084 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 00:18:59.067655  248084 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 00:18:59.067761  248084 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 00:18:59.067952  248084 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 00:18:59.068089  248084 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004306 seconds
	I1031 00:18:59.068174  248084 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 00:18:59.068330  248084 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 00:18:59.068419  248084 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 00:18:59.068536  248084 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-225140 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1031 00:18:59.068585  248084 kubeadm.go:322] [bootstrap-token] Using token: 1g4jse.zc5opkcf3va44z15
	I1031 00:18:59.070040  248084 out.go:204]   - Configuring RBAC rules ...
	I1031 00:18:59.070142  248084 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 00:18:59.070305  248084 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 00:18:59.070451  248084 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 00:18:59.070569  248084 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 00:18:59.070657  248084 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 00:18:59.070700  248084 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 00:18:59.070742  248084 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 00:18:59.070748  248084 kubeadm.go:322] 
	I1031 00:18:59.070799  248084 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 00:18:59.070809  248084 kubeadm.go:322] 
	I1031 00:18:59.070900  248084 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 00:18:59.070912  248084 kubeadm.go:322] 
	I1031 00:18:59.070933  248084 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 00:18:59.070983  248084 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 00:18:59.071030  248084 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 00:18:59.071035  248084 kubeadm.go:322] 
	I1031 00:18:59.071082  248084 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 00:18:59.071158  248084 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 00:18:59.071269  248084 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 00:18:59.071278  248084 kubeadm.go:322] 
	I1031 00:18:59.071392  248084 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1031 00:18:59.071498  248084 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 00:18:59.071509  248084 kubeadm.go:322] 
	I1031 00:18:59.071608  248084 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1g4jse.zc5opkcf3va44z15 \
	I1031 00:18:59.071749  248084 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 \
	I1031 00:18:59.071783  248084 kubeadm.go:322]     --control-plane 	  
	I1031 00:18:59.071793  248084 kubeadm.go:322] 
	I1031 00:18:59.071899  248084 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 00:18:59.071912  248084 kubeadm.go:322] 
	I1031 00:18:59.072051  248084 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1g4jse.zc5opkcf3va44z15 \
	I1031 00:18:59.072196  248084 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1031 00:18:59.072228  248084 cni.go:84] Creating CNI manager for ""
	I1031 00:18:59.072243  248084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:18:59.073949  248084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:18:55.479295  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:57.983131  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:59.075900  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:18:59.087288  248084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:18:59.112130  248084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:18:59.112241  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:59.112258  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4 minikube.k8s.io/name=old-k8s-version-225140 minikube.k8s.io/updated_at=2023_10_31T00_18_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:59.144297  248084 ops.go:34] apiserver oom_adj: -16
	I1031 00:18:59.352655  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:59.464268  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:00.069316  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:00.569382  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:00.481532  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:02.978563  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:01.069124  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:01.569535  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:02.069209  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:02.569292  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:03.069280  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:03.569469  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:04.069050  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:04.569082  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:05.068795  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:05.569625  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:05.479444  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:07.980592  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:09.982873  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:06.069318  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:06.569043  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:07.069599  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:07.569098  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:08.069690  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:08.569668  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:09.069735  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:09.569294  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:10.069080  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:10.569441  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:11.068991  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:11.569543  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:12.069495  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:12.568757  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:13.069012  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:13.569638  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:13.789009  248084 kubeadm.go:1081] duration metric: took 14.676828073s to wait for elevateKubeSystemPrivileges.
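
The burst of repeated `kubectl get sa default` runs above (roughly one every 500ms, 14.68s in total per the duration metric) is the wait behind the elevateKubeSystemPrivileges step: the "default" ServiceAccount only exists once the controller-manager's serviceaccount controller is running, so the lookup is simply retried until it succeeds. A stripped-down version of that retry loop, shelling out to kubectl, is sketched below; the binary and kubeconfig paths are the ones visible in the log.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` until the ServiceAccount
// exists or the deadline passes, mirroring the retry loop in the log above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the serviceaccount controller has created "default"
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default serviceaccount did not appear within %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.16.0/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}
```
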
	I1031 00:19:13.789061  248084 kubeadm.go:406] StartCluster complete in 5m39.716410778s
	I1031 00:19:13.789090  248084 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:19:13.789209  248084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:19:13.791883  248084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:19:13.792204  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:19:13.792368  248084 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:19:13.792451  248084 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-225140"
	I1031 00:19:13.792457  248084 config.go:182] Loaded profile config "old-k8s-version-225140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1031 00:19:13.792471  248084 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-225140"
	W1031 00:19:13.792480  248084 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:19:13.792485  248084 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-225140"
	I1031 00:19:13.792515  248084 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-225140"
	I1031 00:19:13.792531  248084 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-225140"
	I1031 00:19:13.792534  248084 host.go:66] Checking if "old-k8s-version-225140" exists ...
	W1031 00:19:13.792540  248084 addons.go:240] addon metrics-server should already be in state true
	I1031 00:19:13.792568  248084 host.go:66] Checking if "old-k8s-version-225140" exists ...
	I1031 00:19:13.792516  248084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-225140"
	I1031 00:19:13.792981  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.792981  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.793021  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.793104  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.793147  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.793254  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.811115  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34449
	I1031 00:19:13.811377  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I1031 00:19:13.811793  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.811913  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.812411  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.812433  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.812586  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.812636  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.812764  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.812833  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35585
	I1031 00:19:13.813035  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:19:13.813186  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.813284  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.813624  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.813649  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.813896  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.813938  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.813984  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.814742  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.814791  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.817328  248084 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-225140"
	W1031 00:19:13.817352  248084 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:19:13.817383  248084 host.go:66] Checking if "old-k8s-version-225140" exists ...
	I1031 00:19:13.817651  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.817676  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.831410  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I1031 00:19:13.832059  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.832665  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.832686  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.833071  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.833396  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:19:13.834672  248084 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-225140" context rescaled to 1 replicas
	I1031 00:19:13.834715  248084 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.65 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:19:13.837043  248084 out.go:177] * Verifying Kubernetes components...
	I1031 00:19:13.834927  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I1031 00:19:13.835269  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:19:13.835504  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I1031 00:19:13.837823  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.838827  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:19:13.840427  248084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:19:13.838307  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.839305  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.842067  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.842200  248084 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:19:13.842220  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:19:13.842259  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:19:13.842518  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.843110  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.843159  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.843539  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.843577  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.844178  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.844488  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:19:13.846259  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.846704  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:19:13.848811  248084 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:19:12.479334  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:14.484105  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:13.847143  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:19:13.847192  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:19:13.850295  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.850300  248084 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:19:13.850319  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:19:13.850341  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:19:13.850537  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:19:13.850712  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:19:13.851115  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:19:13.853651  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.854192  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:19:13.854226  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.854563  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:19:13.854758  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:19:13.854967  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:19:13.855112  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:19:13.862473  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33537
	I1031 00:19:13.862970  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.863496  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.863526  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.864026  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.864257  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:19:13.866270  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:19:13.866530  248084 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:19:13.866546  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:19:13.866565  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:19:13.870580  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.870992  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:19:13.871028  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.871142  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:19:13.871372  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:19:13.871542  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:19:13.871678  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:19:14.034938  248084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:19:14.040988  248084 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:19:14.041016  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:19:14.061666  248084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:19:14.111727  248084 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:19:14.111758  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:19:14.125610  248084 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-225140" to be "Ready" ...
	I1031 00:19:14.125707  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 00:19:14.165369  248084 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:19:14.165397  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:19:14.193366  248084 node_ready.go:49] node "old-k8s-version-225140" has status "Ready":"True"
	I1031 00:19:14.193389  248084 node_ready.go:38] duration metric: took 67.750717ms waiting for node "old-k8s-version-225140" to be "Ready" ...
	I1031 00:19:14.193401  248084 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:19:14.207505  248084 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace to be "Ready" ...
	I1031 00:19:14.276613  248084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:19:15.572065  248084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.537074399s)
	I1031 00:19:15.572136  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.572152  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.572177  248084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.510470973s)
	I1031 00:19:15.572219  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.572238  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.572336  248084 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.446596481s)
	I1031 00:19:15.572363  248084 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1031 00:19:15.572603  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.572621  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.572632  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.572642  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.572697  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.572711  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.572757  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.572778  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.572756  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Closing plugin on server side
	I1031 00:19:15.572908  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Closing plugin on server side
	I1031 00:19:15.572910  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.572970  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.573533  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.573554  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.586186  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.586210  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.586507  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.586530  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.586546  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Closing plugin on server side
	I1031 00:19:15.700772  248084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.424096792s)
	I1031 00:19:15.700835  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.700851  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.701196  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.701217  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.701230  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.701242  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.701531  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Closing plugin on server side
	I1031 00:19:15.701561  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.701574  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.701585  248084 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-225140"
	I1031 00:19:15.703404  248084 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1031 00:19:15.704856  248084 addons.go:502] enable addons completed in 1.91251063s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1031 00:19:16.980629  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:19.478989  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:16.278623  248084 pod_ready.go:102] pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:18.779192  248084 pod_ready.go:102] pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:21.978882  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:23.981260  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:21.276797  248084 pod_ready.go:102] pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:23.277531  248084 pod_ready.go:92] pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace has status "Ready":"True"
	I1031 00:19:23.277561  248084 pod_ready.go:81] duration metric: took 9.070020963s waiting for pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace to be "Ready" ...
	I1031 00:19:23.277575  248084 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v2pp4" in "kube-system" namespace to be "Ready" ...
	I1031 00:19:23.283345  248084 pod_ready.go:92] pod "kube-proxy-v2pp4" in "kube-system" namespace has status "Ready":"True"
	I1031 00:19:23.283367  248084 pod_ready.go:81] duration metric: took 5.78532ms waiting for pod "kube-proxy-v2pp4" in "kube-system" namespace to be "Ready" ...
	I1031 00:19:23.283374  248084 pod_ready.go:38] duration metric: took 9.089964646s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:19:23.283394  248084 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:19:23.283452  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:19:23.300275  248084 api_server.go:72] duration metric: took 9.465522842s to wait for apiserver process to appear ...
	I1031 00:19:23.300294  248084 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:19:23.300308  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:19:23.309064  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 200:
	ok
	I1031 00:19:23.310485  248084 api_server.go:141] control plane version: v1.16.0
	I1031 00:19:23.310508  248084 api_server.go:131] duration metric: took 10.207384ms to wait for apiserver health ...
	I1031 00:19:23.310517  248084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:19:23.314181  248084 system_pods.go:59] 4 kube-system pods found
	I1031 00:19:23.314205  248084 system_pods.go:61] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:23.314210  248084 system_pods.go:61] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:23.314217  248084 system_pods.go:61] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:23.314224  248084 system_pods.go:61] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:23.314230  248084 system_pods.go:74] duration metric: took 3.706807ms to wait for pod list to return data ...
	I1031 00:19:23.314236  248084 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:19:23.316411  248084 default_sa.go:45] found service account: "default"
	I1031 00:19:23.316435  248084 default_sa.go:55] duration metric: took 2.192647ms for default service account to be created ...
	I1031 00:19:23.316443  248084 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:19:23.320111  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:23.320137  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:23.320148  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:23.320159  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:23.320167  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:23.320190  248084 retry.go:31] will retry after 199.965979ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:23.524726  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:23.524754  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:23.524760  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:23.524766  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:23.524773  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:23.524788  248084 retry.go:31] will retry after 276.623866ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:23.807038  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:23.807066  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:23.807072  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:23.807080  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:23.807087  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:23.807104  248084 retry.go:31] will retry after 316.245952ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:24.128239  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:24.128268  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:24.128277  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:24.128287  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:24.128297  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:24.128326  248084 retry.go:31] will retry after 483.558456ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:24.616454  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:24.616486  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:24.616494  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:24.616505  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:24.616514  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:24.616534  248084 retry.go:31] will retry after 700.807178ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:25.323617  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:25.323666  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:25.323675  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:25.323687  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:25.323697  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:25.323718  248084 retry.go:31] will retry after 768.27646ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:26.485923  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:28.978283  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:26.097257  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:26.097283  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:26.097288  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:26.097295  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:26.097302  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:26.097320  248084 retry.go:31] will retry after 1.004884505s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:27.108295  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:27.108330  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:27.108339  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:27.108350  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:27.108360  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:27.108380  248084 retry.go:31] will retry after 1.256932803s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:28.369629  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:28.369668  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:28.369677  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:28.369688  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:28.369698  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:28.369722  248084 retry.go:31] will retry after 1.554545012s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:29.930268  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:29.930295  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:29.930314  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:29.930322  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:29.930338  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:29.930358  248084 retry.go:31] will retry after 1.794325328s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:30.981402  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:33.478794  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:31.729473  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:31.729511  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:31.729520  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:31.729531  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:31.729542  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:31.729563  248084 retry.go:31] will retry after 2.111450847s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:33.846759  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:33.846787  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:33.846792  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:33.846801  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:33.846807  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:33.846824  248084 retry.go:31] will retry after 2.198886772s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:35.981890  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:38.478284  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:36.050460  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:36.050491  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:36.050496  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:36.050505  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:36.050512  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:36.050530  248084 retry.go:31] will retry after 3.361148685s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:39.417603  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:39.417633  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:39.417640  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:39.417651  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:39.417660  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:39.417680  248084 retry.go:31] will retry after 4.41093106s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:40.978990  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:43.479103  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:43.834041  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:43.834083  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:43.834093  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:43.834104  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:43.834115  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:43.834134  248084 retry.go:31] will retry after 5.294476287s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:45.482986  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:47.978397  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:49.980183  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:49.133233  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:49.133264  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:49.133269  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:49.133276  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:49.133284  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:49.133300  248084 retry.go:31] will retry after 7.429511286s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:51.980355  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:53.981222  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:56.480456  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:58.979640  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:56.567247  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:56.567278  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:56.567284  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:56.567290  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:56.567297  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:56.567314  248084 retry.go:31] will retry after 10.944177906s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:20:01.477606  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:03.481220  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:05.979560  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:07.984688  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:07.518274  248084 system_pods.go:86] 7 kube-system pods found
	I1031 00:20:07.518300  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:20:07.518306  248084 system_pods.go:89] "kube-apiserver-old-k8s-version-225140" [8452eeb3-bce5-4105-aca6-41c438d0cd33] Pending
	I1031 00:20:07.518310  248084 system_pods.go:89] "kube-controller-manager-old-k8s-version-225140" [8d9ce065-09f3-4323-a564-195c4ae96389] Pending
	I1031 00:20:07.518314  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:20:07.518318  248084 system_pods.go:89] "kube-scheduler-old-k8s-version-225140" [aa567dc5-4668-4730-bfee-e1afdac14098] Pending
	I1031 00:20:07.518325  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:20:07.518331  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:20:07.518349  248084 retry.go:31] will retry after 8.381829497s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:20:10.485015  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:12.978647  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:15.479489  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:17.980834  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:15.906034  248084 system_pods.go:86] 8 kube-system pods found
	I1031 00:20:15.906066  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:20:15.906074  248084 system_pods.go:89] "etcd-old-k8s-version-225140" [c3c7682d-4b48-4e50-ba06-676723621872] Pending
	I1031 00:20:15.906080  248084 system_pods.go:89] "kube-apiserver-old-k8s-version-225140" [8452eeb3-bce5-4105-aca6-41c438d0cd33] Running
	I1031 00:20:15.906087  248084 system_pods.go:89] "kube-controller-manager-old-k8s-version-225140" [8d9ce065-09f3-4323-a564-195c4ae96389] Running
	I1031 00:20:15.906093  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:20:15.906100  248084 system_pods.go:89] "kube-scheduler-old-k8s-version-225140" [aa567dc5-4668-4730-bfee-e1afdac14098] Running
	I1031 00:20:15.906109  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:20:15.906120  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:20:15.906138  248084 retry.go:31] will retry after 11.167332732s: missing components: etcd
	I1031 00:20:20.481147  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:22.980858  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:24.982265  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:27.080224  248084 system_pods.go:86] 8 kube-system pods found
	I1031 00:20:27.080263  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:20:27.080272  248084 system_pods.go:89] "etcd-old-k8s-version-225140" [c3c7682d-4b48-4e50-ba06-676723621872] Running
	I1031 00:20:27.080279  248084 system_pods.go:89] "kube-apiserver-old-k8s-version-225140" [8452eeb3-bce5-4105-aca6-41c438d0cd33] Running
	I1031 00:20:27.080287  248084 system_pods.go:89] "kube-controller-manager-old-k8s-version-225140" [8d9ce065-09f3-4323-a564-195c4ae96389] Running
	I1031 00:20:27.080294  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:20:27.080301  248084 system_pods.go:89] "kube-scheduler-old-k8s-version-225140" [aa567dc5-4668-4730-bfee-e1afdac14098] Running
	I1031 00:20:27.080318  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:20:27.080332  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:20:27.080343  248084 system_pods.go:126] duration metric: took 1m3.763892339s to wait for k8s-apps to be running ...
	I1031 00:20:27.080357  248084 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:20:27.080408  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:20:27.098039  248084 system_svc.go:56] duration metric: took 17.670849ms WaitForService to wait for kubelet.
	I1031 00:20:27.098075  248084 kubeadm.go:581] duration metric: took 1m13.263332949s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:20:27.098105  248084 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:20:27.101093  248084 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:20:27.101126  248084 node_conditions.go:123] node cpu capacity is 2
	I1031 00:20:27.101182  248084 node_conditions.go:105] duration metric: took 3.066191ms to run NodePressure ...
	I1031 00:20:27.101198  248084 start.go:228] waiting for startup goroutines ...
	I1031 00:20:27.101208  248084 start.go:233] waiting for cluster config update ...
	I1031 00:20:27.101222  248084 start.go:242] writing updated cluster config ...
	I1031 00:20:27.101586  248084 ssh_runner.go:195] Run: rm -f paused
	I1031 00:20:27.157211  248084 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1031 00:20:27.159327  248084 out.go:177] 
	W1031 00:20:27.160872  248084 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1031 00:20:27.163644  248084 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1031 00:20:27.165443  248084 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-225140" cluster and "default" namespace by default
	I1031 00:20:27.481582  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:29.978812  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:32.478965  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:34.479052  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:36.486487  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:38.981098  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:41.478500  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:43.478933  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:45.978794  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:47.978937  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:49.980825  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:52.479268  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:54.978422  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:57.478476  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:59.478602  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:01.478639  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:03.479969  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:05.978907  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:08.478656  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:10.978877  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:12.981683  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:15.479094  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:17.978893  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:20.479878  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:22.483287  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:24.978077  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:26.979122  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:28.981476  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:31.478577  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:33.479816  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:35.979787  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:37.981859  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:40.477762  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:42.479382  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:44.479508  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:46.479851  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:48.482610  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:49.171002  248387 pod_ready.go:81] duration metric: took 4m0.000595541s waiting for pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace to be "Ready" ...
	E1031 00:21:49.171048  248387 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:21:49.171063  248387 pod_ready.go:38] duration metric: took 4m2.795014386s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:21:49.171097  248387 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:21:49.171149  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:21:49.171248  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:21:49.226512  248387 cri.go:89] found id: "d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:21:49.226543  248387 cri.go:89] found id: ""
	I1031 00:21:49.226555  248387 logs.go:284] 1 containers: [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850]
	I1031 00:21:49.226647  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.230993  248387 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:21:49.231060  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:21:49.270646  248387 cri.go:89] found id: "07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:21:49.270677  248387 cri.go:89] found id: ""
	I1031 00:21:49.270688  248387 logs.go:284] 1 containers: [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3]
	I1031 00:21:49.270760  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.275165  248387 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:21:49.275225  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:21:49.317730  248387 cri.go:89] found id: "12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:21:49.317757  248387 cri.go:89] found id: ""
	I1031 00:21:49.317768  248387 logs.go:284] 1 containers: [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e]
	I1031 00:21:49.317818  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.322362  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:21:49.322430  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:21:49.361430  248387 cri.go:89] found id: "6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:21:49.361462  248387 cri.go:89] found id: ""
	I1031 00:21:49.361474  248387 logs.go:284] 1 containers: [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c]
	I1031 00:21:49.361535  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.365642  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:21:49.365713  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:21:49.409230  248387 cri.go:89] found id: "744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:21:49.409258  248387 cri.go:89] found id: ""
	I1031 00:21:49.409269  248387 logs.go:284] 1 containers: [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373]
	I1031 00:21:49.409329  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.413540  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:21:49.413622  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:21:49.458477  248387 cri.go:89] found id: "d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:21:49.458506  248387 cri.go:89] found id: ""
	I1031 00:21:49.458518  248387 logs.go:284] 1 containers: [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb]
	I1031 00:21:49.458586  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.462471  248387 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:21:49.462540  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:21:49.498272  248387 cri.go:89] found id: ""
	I1031 00:21:49.498299  248387 logs.go:284] 0 containers: []
	W1031 00:21:49.498309  248387 logs.go:286] No container was found matching "kindnet"
	I1031 00:21:49.498316  248387 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:21:49.498386  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:21:49.538677  248387 cri.go:89] found id: "bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:21:49.538704  248387 cri.go:89] found id: ""
	I1031 00:21:49.538714  248387 logs.go:284] 1 containers: [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07]
	I1031 00:21:49.538776  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.544293  248387 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:21:49.544318  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:21:49.719505  248387 logs.go:123] Gathering logs for kube-apiserver [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850] ...
	I1031 00:21:49.719542  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:21:49.770108  248387 logs.go:123] Gathering logs for kube-scheduler [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c] ...
	I1031 00:21:49.770146  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:21:49.826250  248387 logs.go:123] Gathering logs for storage-provisioner [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07] ...
	I1031 00:21:49.826289  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:21:49.864212  248387 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:21:49.864244  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:21:50.278307  248387 logs.go:123] Gathering logs for container status ...
	I1031 00:21:50.278348  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:21:50.332860  248387 logs.go:123] Gathering logs for kubelet ...
	I1031 00:21:50.332894  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 00:21:50.413002  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.413224  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.413368  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.413524  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:21:50.435703  248387 logs.go:123] Gathering logs for dmesg ...
	I1031 00:21:50.435739  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:21:50.451836  248387 logs.go:123] Gathering logs for etcd [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3] ...
	I1031 00:21:50.451865  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:21:50.493883  248387 logs.go:123] Gathering logs for coredns [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e] ...
	I1031 00:21:50.493912  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:21:50.533935  248387 logs.go:123] Gathering logs for kube-proxy [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373] ...
	I1031 00:21:50.533967  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:21:50.582053  248387 logs.go:123] Gathering logs for kube-controller-manager [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb] ...
	I1031 00:21:50.582094  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:21:50.638988  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:21:50.639021  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 00:21:50.639177  248387 out.go:239] X Problems detected in kubelet:
	W1031 00:21:50.639191  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.639201  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.639213  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.639219  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:21:50.639225  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:21:50.639232  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:22:00.639748  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:22:00.663810  248387 api_server.go:72] duration metric: took 4m16.69659563s to wait for apiserver process to appear ...
	I1031 00:22:00.663846  248387 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:22:00.663904  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:22:00.663980  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:22:00.705584  248387 cri.go:89] found id: "d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:22:00.705611  248387 cri.go:89] found id: ""
	I1031 00:22:00.705620  248387 logs.go:284] 1 containers: [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850]
	I1031 00:22:00.705672  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.710031  248387 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:22:00.710113  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:22:00.747821  248387 cri.go:89] found id: "07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:22:00.747850  248387 cri.go:89] found id: ""
	I1031 00:22:00.747861  248387 logs.go:284] 1 containers: [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3]
	I1031 00:22:00.747926  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.752647  248387 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:22:00.752733  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:22:00.802165  248387 cri.go:89] found id: "12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:22:00.802200  248387 cri.go:89] found id: ""
	I1031 00:22:00.802210  248387 logs.go:284] 1 containers: [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e]
	I1031 00:22:00.802274  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.807367  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:22:00.807451  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:22:00.846633  248387 cri.go:89] found id: "6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:22:00.846661  248387 cri.go:89] found id: ""
	I1031 00:22:00.846670  248387 logs.go:284] 1 containers: [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c]
	I1031 00:22:00.846736  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.851197  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:22:00.851282  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:22:00.891522  248387 cri.go:89] found id: "744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:22:00.891549  248387 cri.go:89] found id: ""
	I1031 00:22:00.891559  248387 logs.go:284] 1 containers: [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373]
	I1031 00:22:00.891624  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.896269  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:22:00.896369  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:22:00.937565  248387 cri.go:89] found id: "d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:22:00.937594  248387 cri.go:89] found id: ""
	I1031 00:22:00.937606  248387 logs.go:284] 1 containers: [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb]
	I1031 00:22:00.937672  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.942205  248387 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:22:00.942287  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:22:00.984788  248387 cri.go:89] found id: ""
	I1031 00:22:00.984814  248387 logs.go:284] 0 containers: []
	W1031 00:22:00.984821  248387 logs.go:286] No container was found matching "kindnet"
	I1031 00:22:00.984827  248387 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:22:00.984883  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:22:01.032572  248387 cri.go:89] found id: "bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:22:01.032601  248387 cri.go:89] found id: ""
	I1031 00:22:01.032621  248387 logs.go:284] 1 containers: [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07]
	I1031 00:22:01.032685  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:01.037253  248387 logs.go:123] Gathering logs for container status ...
	I1031 00:22:01.037280  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:22:01.096027  248387 logs.go:123] Gathering logs for kubelet ...
	I1031 00:22:01.096065  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 00:22:01.166608  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:01.166786  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:01.166925  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:01.167075  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:22:01.188441  248387 logs.go:123] Gathering logs for etcd [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3] ...
	I1031 00:22:01.188473  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:22:01.238925  248387 logs.go:123] Gathering logs for coredns [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e] ...
	I1031 00:22:01.238961  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:22:01.278987  248387 logs.go:123] Gathering logs for kube-controller-manager [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb] ...
	I1031 00:22:01.279024  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:22:01.340249  248387 logs.go:123] Gathering logs for kube-proxy [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373] ...
	I1031 00:22:01.340284  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:22:01.381155  248387 logs.go:123] Gathering logs for storage-provisioner [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07] ...
	I1031 00:22:01.381191  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:22:01.421808  248387 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:22:01.421842  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:22:01.817836  248387 logs.go:123] Gathering logs for dmesg ...
	I1031 00:22:01.817877  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:22:01.832590  248387 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:22:01.832620  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:22:01.961348  248387 logs.go:123] Gathering logs for kube-apiserver [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850] ...
	I1031 00:22:01.961384  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:22:02.023997  248387 logs.go:123] Gathering logs for kube-scheduler [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c] ...
	I1031 00:22:02.024055  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:22:02.087279  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:22:02.087321  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 00:22:02.087437  248387 out.go:239] X Problems detected in kubelet:
	W1031 00:22:02.087460  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:02.087476  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:02.087485  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:02.087495  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:22:02.087513  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:22:02.087527  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:22:12.090012  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:22:12.096458  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 200:
	ok
	I1031 00:22:12.097833  248387 api_server.go:141] control plane version: v1.28.3
	I1031 00:22:12.097860  248387 api_server.go:131] duration metric: took 11.434005759s to wait for apiserver health ...
	I1031 00:22:12.097872  248387 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:22:12.097901  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:22:12.098004  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:22:12.161098  248387 cri.go:89] found id: "d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:22:12.161129  248387 cri.go:89] found id: ""
	I1031 00:22:12.161140  248387 logs.go:284] 1 containers: [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850]
	I1031 00:22:12.161199  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.166236  248387 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:22:12.166325  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:22:12.208793  248387 cri.go:89] found id: "07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:22:12.208815  248387 cri.go:89] found id: ""
	I1031 00:22:12.208824  248387 logs.go:284] 1 containers: [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3]
	I1031 00:22:12.208871  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.213722  248387 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:22:12.213791  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:22:12.256006  248387 cri.go:89] found id: "12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:22:12.256036  248387 cri.go:89] found id: ""
	I1031 00:22:12.256046  248387 logs.go:284] 1 containers: [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e]
	I1031 00:22:12.256116  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.260468  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:22:12.260546  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:22:12.305580  248387 cri.go:89] found id: "6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:22:12.305608  248387 cri.go:89] found id: ""
	I1031 00:22:12.305618  248387 logs.go:284] 1 containers: [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c]
	I1031 00:22:12.305687  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.313321  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:22:12.313390  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:22:12.359900  248387 cri.go:89] found id: "744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:22:12.359928  248387 cri.go:89] found id: ""
	I1031 00:22:12.359939  248387 logs.go:284] 1 containers: [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373]
	I1031 00:22:12.360003  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.364087  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:22:12.364171  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:22:12.403635  248387 cri.go:89] found id: "d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:22:12.403660  248387 cri.go:89] found id: ""
	I1031 00:22:12.403675  248387 logs.go:284] 1 containers: [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb]
	I1031 00:22:12.403743  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.408014  248387 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:22:12.408087  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:22:12.449718  248387 cri.go:89] found id: ""
	I1031 00:22:12.449741  248387 logs.go:284] 0 containers: []
	W1031 00:22:12.449748  248387 logs.go:286] No container was found matching "kindnet"
	I1031 00:22:12.449753  248387 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:22:12.449802  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:22:12.490301  248387 cri.go:89] found id: "bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:22:12.490330  248387 cri.go:89] found id: ""
	I1031 00:22:12.490340  248387 logs.go:284] 1 containers: [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07]
	I1031 00:22:12.490396  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.495061  248387 logs.go:123] Gathering logs for kube-proxy [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373] ...
	I1031 00:22:12.495125  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:22:12.537124  248387 logs.go:123] Gathering logs for kube-controller-manager [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb] ...
	I1031 00:22:12.537163  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:22:12.597600  248387 logs.go:123] Gathering logs for storage-provisioner [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07] ...
	I1031 00:22:12.597642  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:22:12.637344  248387 logs.go:123] Gathering logs for container status ...
	I1031 00:22:12.637385  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:22:12.691076  248387 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:22:12.691107  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:22:12.820546  248387 logs.go:123] Gathering logs for kube-apiserver [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850] ...
	I1031 00:22:12.820578  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:22:12.871913  248387 logs.go:123] Gathering logs for coredns [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e] ...
	I1031 00:22:12.871953  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:22:12.914661  248387 logs.go:123] Gathering logs for kube-scheduler [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c] ...
	I1031 00:22:12.914705  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:22:12.965771  248387 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:22:12.965810  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:22:13.352819  248387 logs.go:123] Gathering logs for kubelet ...
	I1031 00:22:13.352862  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 00:22:13.424722  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.424906  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.425062  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.425220  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:22:13.447363  248387 logs.go:123] Gathering logs for dmesg ...
	I1031 00:22:13.447393  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:22:13.462468  248387 logs.go:123] Gathering logs for etcd [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3] ...
	I1031 00:22:13.462502  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:22:13.507930  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:22:13.507960  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 00:22:13.508045  248387 out.go:239] X Problems detected in kubelet:
	W1031 00:22:13.508060  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.508072  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.508084  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.508097  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:22:13.508107  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:22:13.508114  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:22:23.516544  248387 system_pods.go:59] 8 kube-system pods found
	I1031 00:22:23.516574  248387 system_pods.go:61] "coredns-5dd5756b68-gp6pj" [b7086342-a1ed-42b3-819a-ad7d8211ad17] Running
	I1031 00:22:23.516579  248387 system_pods.go:61] "etcd-no-preload-640155" [d9381fc3-0181-4631-90e7-6749d37cf8ab] Running
	I1031 00:22:23.516584  248387 system_pods.go:61] "kube-apiserver-no-preload-640155" [26b9547d-6b10-428a-a26f-47b007f06402] Running
	I1031 00:22:23.516588  248387 system_pods.go:61] "kube-controller-manager-no-preload-640155" [7b5ec3dd-11a2-4409-a271-e3f4149c49fe] Running
	I1031 00:22:23.516592  248387 system_pods.go:61] "kube-proxy-pkjsl" [3cc67cf4-4a59-42bf-a6ca-b2be409f5077] Running
	I1031 00:22:23.516597  248387 system_pods.go:61] "kube-scheduler-no-preload-640155" [f027c450-e0ac-4184-88c8-5de421603b25] Running
	I1031 00:22:23.516604  248387 system_pods.go:61] "metrics-server-57f55c9bc5-d2xg4" [b16ae9e6-6deb-485f-af5c-35cafada4a39] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:22:23.516613  248387 system_pods.go:61] "storage-provisioner" [acf2b5d0-1773-4ee6-882d-daff300f9d80] Running
	I1031 00:22:23.516620  248387 system_pods.go:74] duration metric: took 11.418741675s to wait for pod list to return data ...
	I1031 00:22:23.516630  248387 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:22:23.520026  248387 default_sa.go:45] found service account: "default"
	I1031 00:22:23.520050  248387 default_sa.go:55] duration metric: took 3.413856ms for default service account to be created ...
	I1031 00:22:23.520058  248387 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:22:23.526672  248387 system_pods.go:86] 8 kube-system pods found
	I1031 00:22:23.526704  248387 system_pods.go:89] "coredns-5dd5756b68-gp6pj" [b7086342-a1ed-42b3-819a-ad7d8211ad17] Running
	I1031 00:22:23.526712  248387 system_pods.go:89] "etcd-no-preload-640155" [d9381fc3-0181-4631-90e7-6749d37cf8ab] Running
	I1031 00:22:23.526719  248387 system_pods.go:89] "kube-apiserver-no-preload-640155" [26b9547d-6b10-428a-a26f-47b007f06402] Running
	I1031 00:22:23.526729  248387 system_pods.go:89] "kube-controller-manager-no-preload-640155" [7b5ec3dd-11a2-4409-a271-e3f4149c49fe] Running
	I1031 00:22:23.526736  248387 system_pods.go:89] "kube-proxy-pkjsl" [3cc67cf4-4a59-42bf-a6ca-b2be409f5077] Running
	I1031 00:22:23.526753  248387 system_pods.go:89] "kube-scheduler-no-preload-640155" [f027c450-e0ac-4184-88c8-5de421603b25] Running
	I1031 00:22:23.526765  248387 system_pods.go:89] "metrics-server-57f55c9bc5-d2xg4" [b16ae9e6-6deb-485f-af5c-35cafada4a39] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:22:23.526776  248387 system_pods.go:89] "storage-provisioner" [acf2b5d0-1773-4ee6-882d-daff300f9d80] Running
	I1031 00:22:23.526789  248387 system_pods.go:126] duration metric: took 6.724214ms to wait for k8s-apps to be running ...
	I1031 00:22:23.526801  248387 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:22:23.526862  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:22:23.546006  248387 system_svc.go:56] duration metric: took 19.183151ms WaitForService to wait for kubelet.
	I1031 00:22:23.546038  248387 kubeadm.go:581] duration metric: took 4m39.57883274s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:22:23.546066  248387 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:22:23.550930  248387 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:22:23.550975  248387 node_conditions.go:123] node cpu capacity is 2
	I1031 00:22:23.551004  248387 node_conditions.go:105] duration metric: took 4.930974ms to run NodePressure ...
	I1031 00:22:23.551041  248387 start.go:228] waiting for startup goroutines ...
	I1031 00:22:23.551053  248387 start.go:233] waiting for cluster config update ...
	I1031 00:22:23.551064  248387 start.go:242] writing updated cluster config ...
	I1031 00:22:23.551346  248387 ssh_runner.go:195] Run: rm -f paused
	I1031 00:22:23.603812  248387 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1031 00:22:23.605925  248387 out.go:177] * Done! kubectl is now configured to use "no-preload-640155" cluster and "default" namespace by default
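	(Before printing "Done!" the run above performs two readiness probes: an HTTPS GET against the apiserver /healthz endpoint, which the log shows returning 200/"ok", and a poll of the kube-system pods. The Go sketch below reproduces those two checks by hand; it is illustrative only. The healthz URL is the one from the log, TLS verification is skipped purely for this throwaway probe, and the kubectl invocation assumes the current context already points at the test cluster.)

	```go
	// Illustrative sketch of the two readiness checks recorded above:
	// probe the apiserver healthz endpoint, then list kube-system pods.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os/exec"
		"time"
	)

	func main() {
		// Skip TLS verification only because this is a throwaway probe of a
		// local test cluster's self-signed apiserver certificate.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.61.168:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		resp.Body.Close()
		fmt.Println("healthz status:", resp.StatusCode)

		// List kube-system pods the same way one would verify them by hand.
		out, err := exec.Command("kubectl", "get", "pods", "-n", "kube-system").CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err)
		}
		fmt.Print(string(out))
	}
	```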
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-31 00:12:05 UTC, ends at Tue 2023-10-31 00:31:25 UTC. --
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.356098508Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712285356000205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=265c8c6b-ba34-4c12-a2c7-5b27984f688f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.356759774Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6be64f44-f483-4efe-9af0-469c64e76244 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.356807600Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6be64f44-f483-4efe-9af0-469c64e76244 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.356968399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07,PodSandboxId:812c17a71bef27cd1a4b5e6e267981abad85c7899ec1142462a56f979fc80069,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698711467881960246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf2b5d0-1773-4ee6-882d-daff300f9d80,},Annotations:map[string]string{io.kubernetes.container.hash: 8b11db42,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373,PodSandboxId:df3a07191232d109244e31a29145f55fc6065949a6f00882fd5d0a8a1494b444,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698711467605206305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc67cf4-4a59-42bf-a6ca-b2be409f5077,},Annotations:map[string]string{io.kubernetes.container.hash: 3be23bff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e,PodSandboxId:7293c197a03b3201abc827276f5ea75d4abe60534d11435b0fed383dd4ea9771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698711467061316230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gp6pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7086342-a1ed-42b3-819a-ad7d8211ad17,},Annotations:map[string]string{io.kubernetes.container.hash: 5ee357d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c,PodSandboxId:d273e52b8919ce1f86ecb6ffc378b1a2966c7436139bbe047ea9e12bd95c38b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698711443858345748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
78de84bf9e4cea78d031c625cd991114,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3,PodSandboxId:c51a7b199e872c10c757926de1fbcc7f35b35879896087c54e04905a9b99fff3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698711443768156345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d006c17ee88c57b42e8328304b6f774,},Annotations:map[
string]string{io.kubernetes.container.hash: 3cd2a05e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb,PodSandboxId:9912485c08eacbf8a42dd77186c2a7efc211ed49abfd27f8d71f3eb36b66e3bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698711443691928392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03ea8799ec6c67cdc310b5507b
f1e01d,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850,PodSandboxId:cfb58aefd8cc0020511742f06ffe0d99edd92ea63fed0214e636944b75b4beb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698711443374523373,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c464abba4e6ceb32924cfebc2fc059e7,},An
notations:map[string]string{io.kubernetes.container.hash: 362a7add,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6be64f44-f483-4efe-9af0-469c64e76244 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.397805249Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=012ff09a-f060-48e5-a409-6946712a79a1 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.397858682Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=012ff09a-f060-48e5-a409-6946712a79a1 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.399402365Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=43f24957-8d4c-4e8f-bf9c-9d723e3d71a2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.399708607Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712285399698139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=43f24957-8d4c-4e8f-bf9c-9d723e3d71a2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.400735232Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4a712916-df9a-4955-9630-7c0e26341bbc name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.400781542Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4a712916-df9a-4955-9630-7c0e26341bbc name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.401668291Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07,PodSandboxId:812c17a71bef27cd1a4b5e6e267981abad85c7899ec1142462a56f979fc80069,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698711467881960246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf2b5d0-1773-4ee6-882d-daff300f9d80,},Annotations:map[string]string{io.kubernetes.container.hash: 8b11db42,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373,PodSandboxId:df3a07191232d109244e31a29145f55fc6065949a6f00882fd5d0a8a1494b444,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698711467605206305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc67cf4-4a59-42bf-a6ca-b2be409f5077,},Annotations:map[string]string{io.kubernetes.container.hash: 3be23bff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e,PodSandboxId:7293c197a03b3201abc827276f5ea75d4abe60534d11435b0fed383dd4ea9771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698711467061316230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gp6pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7086342-a1ed-42b3-819a-ad7d8211ad17,},Annotations:map[string]string{io.kubernetes.container.hash: 5ee357d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c,PodSandboxId:d273e52b8919ce1f86ecb6ffc378b1a2966c7436139bbe047ea9e12bd95c38b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698711443858345748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
78de84bf9e4cea78d031c625cd991114,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3,PodSandboxId:c51a7b199e872c10c757926de1fbcc7f35b35879896087c54e04905a9b99fff3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698711443768156345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d006c17ee88c57b42e8328304b6f774,},Annotations:map[
string]string{io.kubernetes.container.hash: 3cd2a05e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb,PodSandboxId:9912485c08eacbf8a42dd77186c2a7efc211ed49abfd27f8d71f3eb36b66e3bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698711443691928392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03ea8799ec6c67cdc310b5507b
f1e01d,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850,PodSandboxId:cfb58aefd8cc0020511742f06ffe0d99edd92ea63fed0214e636944b75b4beb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698711443374523373,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c464abba4e6ceb32924cfebc2fc059e7,},An
notations:map[string]string{io.kubernetes.container.hash: 362a7add,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4a712916-df9a-4955-9630-7c0e26341bbc name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.446707642Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b837b3c8-9f96-448e-b0f4-2507ae06bd34 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.446771113Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b837b3c8-9f96-448e-b0f4-2507ae06bd34 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.448247786Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=cd723e53-1502-4174-9c6f-865a7a7a2c40 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.448615289Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712285448601786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=cd723e53-1502-4174-9c6f-865a7a7a2c40 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.449931745Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d8b92ed5-22ad-449a-b31e-b062f54d9f83 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.449982963Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d8b92ed5-22ad-449a-b31e-b062f54d9f83 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.450230776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07,PodSandboxId:812c17a71bef27cd1a4b5e6e267981abad85c7899ec1142462a56f979fc80069,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698711467881960246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf2b5d0-1773-4ee6-882d-daff300f9d80,},Annotations:map[string]string{io.kubernetes.container.hash: 8b11db42,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373,PodSandboxId:df3a07191232d109244e31a29145f55fc6065949a6f00882fd5d0a8a1494b444,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698711467605206305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc67cf4-4a59-42bf-a6ca-b2be409f5077,},Annotations:map[string]string{io.kubernetes.container.hash: 3be23bff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e,PodSandboxId:7293c197a03b3201abc827276f5ea75d4abe60534d11435b0fed383dd4ea9771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698711467061316230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gp6pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7086342-a1ed-42b3-819a-ad7d8211ad17,},Annotations:map[string]string{io.kubernetes.container.hash: 5ee357d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c,PodSandboxId:d273e52b8919ce1f86ecb6ffc378b1a2966c7436139bbe047ea9e12bd95c38b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698711443858345748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
78de84bf9e4cea78d031c625cd991114,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3,PodSandboxId:c51a7b199e872c10c757926de1fbcc7f35b35879896087c54e04905a9b99fff3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698711443768156345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d006c17ee88c57b42e8328304b6f774,},Annotations:map[
string]string{io.kubernetes.container.hash: 3cd2a05e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb,PodSandboxId:9912485c08eacbf8a42dd77186c2a7efc211ed49abfd27f8d71f3eb36b66e3bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698711443691928392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03ea8799ec6c67cdc310b5507b
f1e01d,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850,PodSandboxId:cfb58aefd8cc0020511742f06ffe0d99edd92ea63fed0214e636944b75b4beb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698711443374523373,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c464abba4e6ceb32924cfebc2fc059e7,},An
notations:map[string]string{io.kubernetes.container.hash: 362a7add,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d8b92ed5-22ad-449a-b31e-b062f54d9f83 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.500104776Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9159871e-3696-4c74-8106-d94318e0f559 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.500214477Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9159871e-3696-4c74-8106-d94318e0f559 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.501722358Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4fc3c2fc-d15a-45c6-873b-c69d817413bd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.502237953Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712285502218295,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=4fc3c2fc-d15a-45c6-873b-c69d817413bd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.502941936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c81a09cb-2fe5-4ba4-81d6-25712264c71f name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.503005839Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c81a09cb-2fe5-4ba4-81d6-25712264c71f name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:25 no-preload-640155 crio[712]: time="2023-10-31 00:31:25.503323993Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07,PodSandboxId:812c17a71bef27cd1a4b5e6e267981abad85c7899ec1142462a56f979fc80069,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698711467881960246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf2b5d0-1773-4ee6-882d-daff300f9d80,},Annotations:map[string]string{io.kubernetes.container.hash: 8b11db42,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373,PodSandboxId:df3a07191232d109244e31a29145f55fc6065949a6f00882fd5d0a8a1494b444,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698711467605206305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc67cf4-4a59-42bf-a6ca-b2be409f5077,},Annotations:map[string]string{io.kubernetes.container.hash: 3be23bff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e,PodSandboxId:7293c197a03b3201abc827276f5ea75d4abe60534d11435b0fed383dd4ea9771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698711467061316230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gp6pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7086342-a1ed-42b3-819a-ad7d8211ad17,},Annotations:map[string]string{io.kubernetes.container.hash: 5ee357d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c,PodSandboxId:d273e52b8919ce1f86ecb6ffc378b1a2966c7436139bbe047ea9e12bd95c38b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698711443858345748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
78de84bf9e4cea78d031c625cd991114,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3,PodSandboxId:c51a7b199e872c10c757926de1fbcc7f35b35879896087c54e04905a9b99fff3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698711443768156345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d006c17ee88c57b42e8328304b6f774,},Annotations:map[
string]string{io.kubernetes.container.hash: 3cd2a05e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb,PodSandboxId:9912485c08eacbf8a42dd77186c2a7efc211ed49abfd27f8d71f3eb36b66e3bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698711443691928392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03ea8799ec6c67cdc310b5507b
f1e01d,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850,PodSandboxId:cfb58aefd8cc0020511742f06ffe0d99edd92ea63fed0214e636944b75b4beb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698711443374523373,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c464abba4e6ceb32924cfebc2fc059e7,},An
notations:map[string]string{io.kubernetes.container.hash: 362a7add,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c81a09cb-2fe5-4ba4-81d6-25712264c71f name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bd92760f1aa1b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   812c17a71bef2       storage-provisioner
	744ec7366f8a7       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   13 minutes ago      Running             kube-proxy                0                   df3a07191232d       kube-proxy-pkjsl
	12e3e0eb3fa0f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   7293c197a03b3       coredns-5dd5756b68-gp6pj
	6fe9c6ea686cf       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   14 minutes ago      Running             kube-scheduler            2                   d273e52b8919c       kube-scheduler-no-preload-640155
	07e6ccb405f57       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   14 minutes ago      Running             etcd                      2                   c51a7b199e872       etcd-no-preload-640155
	d106e63a6e40b       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   14 minutes ago      Running             kube-controller-manager   2                   9912485c08eac       kube-controller-manager-no-preload-640155
	d99088bf7c1d1       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   14 minutes ago      Running             kube-apiserver            2                   cfb58aefd8cc0       kube-apiserver-no-preload-640155
	
	* 
	* ==> coredns [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:38278 - 17632 "HINFO IN 7134557370839004967.5240026344512166091. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009358349s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-640155
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-640155
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4
	                    minikube.k8s.io/name=no-preload-640155
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_31T00_17_31_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 00:17:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-640155
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 Oct 2023 00:31:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 00:28:02 +0000   Tue, 31 Oct 2023 00:17:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 00:28:02 +0000   Tue, 31 Oct 2023 00:17:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 00:28:02 +0000   Tue, 31 Oct 2023 00:17:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 00:28:02 +0000   Tue, 31 Oct 2023 00:17:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.168
	  Hostname:    no-preload-640155
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 84caacece5d34fe39253fe3dd5ba85a5
	  System UUID:                84caacec-e5d3-4fe3-9253-fe3dd5ba85a5
	  Boot ID:                    1aa16f0e-0a43-4159-a950-eda4d1a7a374
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-gp6pj                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-640155                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-no-preload-640155             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-640155    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-pkjsl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-640155             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-d2xg4              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node no-preload-640155 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node no-preload-640155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node no-preload-640155 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m   kubelet          Node no-preload-640155 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m   kubelet          Node no-preload-640155 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node no-preload-640155 event: Registered Node no-preload-640155 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct31 00:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068936] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Oct31 00:12] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.504771] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.156790] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.454130] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.335244] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.118128] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.159891] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.117833] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.216188] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +31.156647] systemd-fstab-generator[1269]: Ignoring "noauto" for root device
	[Oct31 00:13] kauditd_printk_skb: 29 callbacks suppressed
	[Oct31 00:17] systemd-fstab-generator[3874]: Ignoring "noauto" for root device
	[  +9.278826] systemd-fstab-generator[4215]: Ignoring "noauto" for root device
	[ +14.932487] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3] <==
	* {"level":"info","ts":"2023-10-31T00:17:25.189517Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.168:2380"}
	{"level":"info","ts":"2023-10-31T00:17:25.193205Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.168:2380"}
	{"level":"info","ts":"2023-10-31T00:17:25.198383Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"81aa8b2870c4e31b","initial-advertise-peer-urls":["https://192.168.61.168:2380"],"listen-peer-urls":["https://192.168.61.168:2380"],"advertise-client-urls":["https://192.168.61.168:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.168:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-31T00:17:25.199612Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-31T00:17:26.012733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81aa8b2870c4e31b is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-31T00:17:26.012823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81aa8b2870c4e31b became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-31T00:17:26.012854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81aa8b2870c4e31b received MsgPreVoteResp from 81aa8b2870c4e31b at term 1"}
	{"level":"info","ts":"2023-10-31T00:17:26.012876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81aa8b2870c4e31b became candidate at term 2"}
	{"level":"info","ts":"2023-10-31T00:17:26.012882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81aa8b2870c4e31b received MsgVoteResp from 81aa8b2870c4e31b at term 2"}
	{"level":"info","ts":"2023-10-31T00:17:26.012891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81aa8b2870c4e31b became leader at term 2"}
	{"level":"info","ts":"2023-10-31T00:17:26.012899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 81aa8b2870c4e31b elected leader 81aa8b2870c4e31b at term 2"}
	{"level":"info","ts":"2023-10-31T00:17:26.014714Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T00:17:26.014987Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"81aa8b2870c4e31b","local-member-attributes":"{Name:no-preload-640155 ClientURLs:[https://192.168.61.168:2379]}","request-path":"/0/members/81aa8b2870c4e31b/attributes","cluster-id":"a8447026812d6081","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-31T00:17:26.015228Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T00:17:26.015947Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a8447026812d6081","local-member-id":"81aa8b2870c4e31b","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T00:17:26.016177Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T00:17:26.016231Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T00:17:26.016916Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-31T00:17:26.017135Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T00:17:26.018199Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.168:2379"}
	{"level":"info","ts":"2023-10-31T00:17:26.030085Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-31T00:17:26.030135Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-31T00:27:26.051658Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":680}
	{"level":"info","ts":"2023-10-31T00:27:26.055675Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":680,"took":"2.921832ms","hash":321229060}
	{"level":"info","ts":"2023-10-31T00:27:26.05581Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":321229060,"revision":680,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  00:31:25 up 19 min,  0 users,  load average: 0.23, 0.27, 0.21
	Linux no-preload-640155 5.10.57 #1 SMP Mon Oct 30 21:42:24 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850] <==
	* I1031 00:27:27.839120       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1031 00:27:28.838952       1 handler_proxy.go:93] no RequestInfo found in the context
	W1031 00:27:28.839126       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:27:28.839518       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1031 00:27:28.839560       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1031 00:27:28.839679       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:27:28.841188       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:28:27.701323       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1031 00:28:28.840438       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:28:28.840606       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1031 00:28:28.840660       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1031 00:28:28.841764       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:28:28.841871       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:28:28.841911       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:29:27.701798       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1031 00:30:27.701691       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1031 00:30:28.841152       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:30:28.841347       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1031 00:30:28.841413       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1031 00:30:28.842235       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:30:28.842420       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:30:28.842465       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb] <==
	* I1031 00:25:44.076085       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:26:13.564843       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:26:14.084935       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:26:43.572688       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:26:44.095339       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:27:13.577776       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:27:14.105228       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:27:43.584244       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:27:44.117360       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:28:13.590776       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:28:14.126659       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:28:43.598116       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:28:44.135624       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1031 00:28:48.327697       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="247.497µs"
	I1031 00:28:59.324609       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="692.849µs"
	E1031 00:29:13.604728       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:29:14.145667       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:29:43.610698       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:29:44.159164       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:30:13.616483       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:30:14.169112       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:30:43.623944       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:30:44.179903       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:31:13.630325       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:31:14.189866       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373] <==
	* I1031 00:17:47.834527       1 server_others.go:69] "Using iptables proxy"
	I1031 00:17:47.855493       1 node.go:141] Successfully retrieved node IP: 192.168.61.168
	I1031 00:17:47.935439       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1031 00:17:47.935514       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1031 00:17:47.940593       1 server_others.go:152] "Using iptables Proxier"
	I1031 00:17:47.940682       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1031 00:17:47.940858       1 server.go:846] "Version info" version="v1.28.3"
	I1031 00:17:47.940871       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 00:17:47.942800       1 config.go:188] "Starting service config controller"
	I1031 00:17:47.942899       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1031 00:17:47.942960       1 config.go:97] "Starting endpoint slice config controller"
	I1031 00:17:47.942965       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1031 00:17:47.944798       1 config.go:315] "Starting node config controller"
	I1031 00:17:47.944834       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1031 00:17:48.046493       1 shared_informer.go:318] Caches are synced for node config
	I1031 00:17:48.046556       1 shared_informer.go:318] Caches are synced for service config
	I1031 00:17:48.046580       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c] <==
	* W1031 00:17:28.673793       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1031 00:17:28.674084       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1031 00:17:28.733245       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1031 00:17:28.733326       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1031 00:17:28.790715       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 00:17:28.790772       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1031 00:17:28.807938       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1031 00:17:28.808088       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1031 00:17:28.855280       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1031 00:17:28.855405       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1031 00:17:28.942532       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 00:17:28.942612       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1031 00:17:29.003128       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1031 00:17:29.003184       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1031 00:17:29.039315       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1031 00:17:29.039375       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1031 00:17:29.073222       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1031 00:17:29.073285       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1031 00:17:29.096862       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1031 00:17:29.096926       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1031 00:17:29.123347       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1031 00:17:29.123442       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1031 00:17:29.155633       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 00:17:29.155727       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1031 00:17:31.732149       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-31 00:12:05 UTC, ends at Tue 2023-10-31 00:31:26 UTC. --
	Oct 31 00:28:33 no-preload-640155 kubelet[4222]: E1031 00:28:33.322447    4222 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 31 00:28:33 no-preload-640155 kubelet[4222]: E1031 00:28:33.322493    4222 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 31 00:28:33 no-preload-640155 kubelet[4222]: E1031 00:28:33.322731    4222 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-pbpvr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-d2xg4_kube-system(b16ae9e6-6deb-485f-af5c-35cafada4a39): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 31 00:28:33 no-preload-640155 kubelet[4222]: E1031 00:28:33.322773    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:28:48 no-preload-640155 kubelet[4222]: E1031 00:28:48.304151    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:28:59 no-preload-640155 kubelet[4222]: E1031 00:28:59.303510    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:29:11 no-preload-640155 kubelet[4222]: E1031 00:29:11.303296    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:29:24 no-preload-640155 kubelet[4222]: E1031 00:29:24.303149    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:29:31 no-preload-640155 kubelet[4222]: E1031 00:29:31.386230    4222 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 00:29:31 no-preload-640155 kubelet[4222]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 00:29:31 no-preload-640155 kubelet[4222]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 00:29:31 no-preload-640155 kubelet[4222]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 00:29:36 no-preload-640155 kubelet[4222]: E1031 00:29:36.302931    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:29:51 no-preload-640155 kubelet[4222]: E1031 00:29:51.303849    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:30:05 no-preload-640155 kubelet[4222]: E1031 00:30:05.303886    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:30:20 no-preload-640155 kubelet[4222]: E1031 00:30:20.304231    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:30:31 no-preload-640155 kubelet[4222]: E1031 00:30:31.386677    4222 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 00:30:31 no-preload-640155 kubelet[4222]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 00:30:31 no-preload-640155 kubelet[4222]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 00:30:31 no-preload-640155 kubelet[4222]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 00:30:34 no-preload-640155 kubelet[4222]: E1031 00:30:34.303136    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:30:47 no-preload-640155 kubelet[4222]: E1031 00:30:47.304222    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:30:58 no-preload-640155 kubelet[4222]: E1031 00:30:58.303755    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:31:09 no-preload-640155 kubelet[4222]: E1031 00:31:09.303338    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:31:21 no-preload-640155 kubelet[4222]: E1031 00:31:21.303276    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	
	* 
	* ==> storage-provisioner [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07] <==
	* I1031 00:17:48.022794       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1031 00:17:48.057909       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1031 00:17:48.058394       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1031 00:17:48.072953       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1031 00:17:48.073446       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-640155_048d4a56-83f9-4317-b90e-c2bc17b7da39!
	I1031 00:17:48.079689       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa96ec76-f883-44a6-a949-cebaf07baf8e", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-640155_048d4a56-83f9-4317-b90e-c2bc17b7da39 became leader
	I1031 00:17:48.175433       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-640155_048d4a56-83f9-4317-b90e-c2bc17b7da39!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-640155 -n no-preload-640155
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-640155 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-d2xg4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-640155 describe pod metrics-server-57f55c9bc5-d2xg4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-640155 describe pod metrics-server-57f55c9bc5-d2xg4: exit status 1 (76.871325ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-d2xg4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-640155 describe pod metrics-server-57f55c9bc5-d2xg4: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.34s)
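For context, the post-mortem above surfaces the failed pod with a field-selector query (`kubectl get po -A --field-selector=status.phase!=Running`). Below is a minimal client-go sketch of the same query, assuming a reachable cluster and a kubeconfig at ~/.kube/config (path assumed for illustration); this is not the test's own code.

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Build a client from the local kubeconfig (assumed location).
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same field selector the post-mortem helper passes to kubectl.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

In the run above this kind of query reported metrics-server-57f55c9bc5-d2xg4 as non-running, and the pod was gone by the time the describe call ran, hence the NotFound stderr.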

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (467.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1031 00:27:08.185108  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-078843 -n embed-certs-078843
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-10-31 00:34:13.835175915 +0000 UTC m=+5556.085192836
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-078843 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-078843 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.545µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-078843 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
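The assertion at start_stop_delete_test.go:297 looks for " registry.k8s.io/echoserver:1.4" among the dashboard deployment's container images; because the describe call above hit the context deadline, there was no deployment info to scan. A rough client-go sketch of that kind of image check follows (the package and helper names are illustrative, not the test's actual code):

package addoncheck

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// addonImagePresent reports whether any container in the named deployment
// references the expected image substring, e.g. "registry.k8s.io/echoserver:1.4".
func addonImagePresent(cs *kubernetes.Clientset, namespace, name, image string) (bool, error) {
	deploy, err := cs.AppsV1().Deployments(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		// In the failed run above, the equivalent kubectl describe never returned:
		// the surrounding context deadline expired first.
		return false, err
	}
	for _, c := range deploy.Spec.Template.Spec.Containers {
		if strings.Contains(c.Image, image) {
			return true, nil
		}
	}
	return false, nil
}

A call matching this failure would be addonImagePresent(cs, "kubernetes-dashboard", "dashboard-metrics-scraper", "registry.k8s.io/echoserver:1.4").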
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-078843 -n embed-certs-078843
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-078843 logs -n 25
E1031 00:34:14.583078  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-078843 logs -n 25: (1.571630893s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|----------------|---------|----------------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p auto-740627 sudo systemctl                        | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:33 UTC | 31 Oct 23 00:33 UTC |
	|         | status kubelet --all --full                          |                |         |                |                     |                     |
	|         | --no-pager                                           |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo systemctl                        | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:33 UTC | 31 Oct 23 00:33 UTC |
	|         | cat kubelet --no-pager                               |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo journalctl                       | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:33 UTC | 31 Oct 23 00:33 UTC |
	|         | -xeu kubelet --all --full                            |                |         |                |                     |                     |
	|         | --no-pager                                           |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo cat                              | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:33 UTC | 31 Oct 23 00:33 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo cat                              | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:33 UTC | 31 Oct 23 00:33 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo systemctl                        | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:33 UTC |                     |
	|         | status docker --all --full                           |                |         |                |                     |                     |
	|         | --no-pager                                           |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo systemctl                        | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:33 UTC | 31 Oct 23 00:33 UTC |
	|         | cat docker --no-pager                                |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo cat                              | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:33 UTC | 31 Oct 23 00:33 UTC |
	|         | /etc/docker/daemon.json                              |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo docker                           | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:33 UTC |                     |
	|         | system info                                          |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo systemctl                        | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:33 UTC |                     |
	|         | status cri-docker --all --full                       |                |         |                |                     |                     |
	|         | --no-pager                                           |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo systemctl                        | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:33 UTC | 31 Oct 23 00:33 UTC |
	|         | cat cri-docker --no-pager                            |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo cat                              | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:33 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo cat                              | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:33 UTC | 31 Oct 23 00:33 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo                                  | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:33 UTC | 31 Oct 23 00:33 UTC |
	|         | cri-dockerd --version                                |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo systemctl                        | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:34 UTC |                     |
	|         | status containerd --all --full                       |                |         |                |                     |                     |
	|         | --no-pager                                           |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo systemctl                        | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:34 UTC | 31 Oct 23 00:34 UTC |
	|         | cat containerd --no-pager                            |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo cat                              | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:34 UTC | 31 Oct 23 00:34 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo cat                              | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:34 UTC | 31 Oct 23 00:34 UTC |
	|         | /etc/containerd/config.toml                          |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo containerd                       | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:34 UTC | 31 Oct 23 00:34 UTC |
	|         | config dump                                          |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo systemctl                        | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:34 UTC | 31 Oct 23 00:34 UTC |
	|         | status crio --all --full                             |                |         |                |                     |                     |
	|         | --no-pager                                           |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo systemctl                        | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:34 UTC | 31 Oct 23 00:34 UTC |
	|         | cat crio --no-pager                                  |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo find                             | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:34 UTC | 31 Oct 23 00:34 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |                |                     |                     |
	| ssh     | -p auto-740627 sudo crio                             | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:34 UTC | 31 Oct 23 00:34 UTC |
	|         | config                                               |                |         |                |                     |                     |
	| delete  | -p auto-740627                                       | auto-740627    | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:34 UTC | 31 Oct 23 00:34 UTC |
	| start   | -p kindnet-740627                                    | kindnet-740627 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:34 UTC |                     |
	|         | --memory=3072                                        |                |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                        |                |         |                |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |                |                     |                     |
	|         | --cni=kindnet --driver=kvm2                          |                |         |                |                     |                     |
	|         | --container-runtime=crio                             |                |         |                |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 00:34:03
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 00:34:03.370520  256701 out.go:296] Setting OutFile to fd 1 ...
	I1031 00:34:03.370820  256701 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:34:03.370831  256701 out.go:309] Setting ErrFile to fd 2...
	I1031 00:34:03.370836  256701 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:34:03.371104  256701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1031 00:34:03.371768  256701 out.go:303] Setting JSON to false
	I1031 00:34:03.372879  256701 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":29795,"bootTime":1698682648,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 00:34:03.372981  256701 start.go:138] virtualization: kvm guest
	I1031 00:34:03.375549  256701 out.go:177] * [kindnet-740627] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 00:34:03.377593  256701 out.go:177]   - MINIKUBE_LOCATION=17527
	I1031 00:34:03.379102  256701 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 00:34:03.377566  256701 notify.go:220] Checking for updates...
	I1031 00:34:03.380926  256701 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:34:03.382540  256701 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1031 00:34:03.384039  256701 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 00:34:03.385499  256701 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 00:34:03.387539  256701 config.go:182] Loaded profile config "default-k8s-diff-port-892233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:34:03.387639  256701 config.go:182] Loaded profile config "embed-certs-078843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:34:03.387752  256701 config.go:182] Loaded profile config "newest-cni-558362": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:34:03.387836  256701 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 00:34:03.425355  256701 out.go:177] * Using the kvm2 driver based on user configuration
	I1031 00:34:03.426821  256701 start.go:298] selected driver: kvm2
	I1031 00:34:03.426837  256701 start.go:902] validating driver "kvm2" against <nil>
	I1031 00:34:03.426847  256701 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 00:34:03.427544  256701 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:34:03.427639  256701 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17527-208817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 00:34:03.442417  256701 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 00:34:03.442458  256701 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1031 00:34:03.442673  256701 start_flags.go:934] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 00:34:03.442732  256701 cni.go:84] Creating CNI manager for "kindnet"
	I1031 00:34:03.442749  256701 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1031 00:34:03.442758  256701 start_flags.go:323] config:
	{Name:kindnet-740627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:kindnet-740627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:34:03.442882  256701 iso.go:125] acquiring lock: {Name:mk17c26869b21ec4c3726ac5b4b2fb393d92c043 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:34:03.445777  256701 out.go:177] * Starting control plane node kindnet-740627 in cluster kindnet-740627
	I1031 00:34:03.447138  256701 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:34:03.447178  256701 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1031 00:34:03.447188  256701 cache.go:56] Caching tarball of preloaded images
	I1031 00:34:03.447278  256701 preload.go:174] Found /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1031 00:34:03.447292  256701 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1031 00:34:03.447405  256701 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/kindnet-740627/config.json ...
	I1031 00:34:03.447436  256701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/kindnet-740627/config.json: {Name:mkbd8f9373730df6373d57e93a4f1c71f174df7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:34:03.447597  256701 start.go:365] acquiring machines lock for kindnet-740627: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 00:34:03.447630  256701 start.go:369] acquired machines lock for "kindnet-740627" in 18.132µs
	I1031 00:34:03.447653  256701 start.go:93] Provisioning new machine with config: &{Name:kindnet-740627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:kindnet-740627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:34:03.447776  256701 start.go:125] createHost starting for "" (driver="kvm2")
	I1031 00:33:59.504304  255091 api_server.go:166] Checking apiserver status ...
	I1031 00:33:59.504367  255091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:33:59.522117  255091 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:34:00.004696  255091 api_server.go:166] Checking apiserver status ...
	I1031 00:34:00.004777  255091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:34:00.021717  255091 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:34:00.504045  255091 api_server.go:166] Checking apiserver status ...
	I1031 00:34:00.504103  255091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:34:00.518119  255091 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:34:01.004717  255091 api_server.go:166] Checking apiserver status ...
	I1031 00:34:01.004797  255091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:34:01.018491  255091 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:34:01.479710  255091 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:34:01.479735  255091 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:34:01.479749  255091 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:34:01.479803  255091 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:34:01.525121  255091 cri.go:89] found id: ""
	I1031 00:34:01.525184  255091 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:34:01.543468  255091 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:34:01.555515  255091 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:34:01.555588  255091 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:34:01.566495  255091 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:34:01.566532  255091 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:34:01.706946  255091 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:34:02.638484  255091 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:34:02.876624  255091 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:34:02.956830  255091 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:34:03.056767  255091 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:34:03.056852  255091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:34:03.074510  255091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:34:03.593411  255091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:34:04.094193  255091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:34:03.449469  256701 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1031 00:34:03.449620  256701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:34:03.449677  256701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:34:03.463199  256701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40005
	I1031 00:34:03.463622  256701 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:34:03.464145  256701 main.go:141] libmachine: Using API Version  1
	I1031 00:34:03.464166  256701 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:34:03.464548  256701 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:34:03.464727  256701 main.go:141] libmachine: (kindnet-740627) Calling .GetMachineName
	I1031 00:34:03.464877  256701 main.go:141] libmachine: (kindnet-740627) Calling .DriverName
	I1031 00:34:03.465036  256701 start.go:159] libmachine.API.Create for "kindnet-740627" (driver="kvm2")
	I1031 00:34:03.465088  256701 client.go:168] LocalClient.Create starting
	I1031 00:34:03.465123  256701 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem
	I1031 00:34:03.465164  256701 main.go:141] libmachine: Decoding PEM data...
	I1031 00:34:03.465190  256701 main.go:141] libmachine: Parsing certificate...
	I1031 00:34:03.465270  256701 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem
	I1031 00:34:03.465299  256701 main.go:141] libmachine: Decoding PEM data...
	I1031 00:34:03.465326  256701 main.go:141] libmachine: Parsing certificate...
	I1031 00:34:03.465344  256701 main.go:141] libmachine: Running pre-create checks...
	I1031 00:34:03.465353  256701 main.go:141] libmachine: (kindnet-740627) Calling .PreCreateCheck
	I1031 00:34:03.465707  256701 main.go:141] libmachine: (kindnet-740627) Calling .GetConfigRaw
	I1031 00:34:03.466089  256701 main.go:141] libmachine: Creating machine...
	I1031 00:34:03.466108  256701 main.go:141] libmachine: (kindnet-740627) Calling .Create
	I1031 00:34:03.466228  256701 main.go:141] libmachine: (kindnet-740627) Creating KVM machine...
	I1031 00:34:03.467538  256701 main.go:141] libmachine: (kindnet-740627) DBG | found existing default KVM network
	I1031 00:34:03.468804  256701 main.go:141] libmachine: (kindnet-740627) DBG | I1031 00:34:03.468567  256725 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:97:44:a3} reservation:<nil>}
	I1031 00:34:03.469479  256701 main.go:141] libmachine: (kindnet-740627) DBG | I1031 00:34:03.469393  256725 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:85:48:10} reservation:<nil>}
	I1031 00:34:03.470514  256701 main.go:141] libmachine: (kindnet-740627) DBG | I1031 00:34:03.470430  256725 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027cfc0}
	I1031 00:34:03.475996  256701 main.go:141] libmachine: (kindnet-740627) DBG | trying to create private KVM network mk-kindnet-740627 192.168.61.0/24...
	I1031 00:34:03.551346  256701 main.go:141] libmachine: (kindnet-740627) DBG | private KVM network mk-kindnet-740627 192.168.61.0/24 created
	I1031 00:34:03.551375  256701 main.go:141] libmachine: (kindnet-740627) Setting up store path in /home/jenkins/minikube-integration/17527-208817/.minikube/machines/kindnet-740627 ...
	I1031 00:34:03.551392  256701 main.go:141] libmachine: (kindnet-740627) DBG | I1031 00:34:03.551326  256725 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17527-208817/.minikube
	I1031 00:34:03.551405  256701 main.go:141] libmachine: (kindnet-740627) Building disk image from file:///home/jenkins/minikube-integration/17527-208817/.minikube/cache/iso/amd64/minikube-v1.32.0-1698684775-17527-amd64.iso
	I1031 00:34:03.551460  256701 main.go:141] libmachine: (kindnet-740627) Downloading /home/jenkins/minikube-integration/17527-208817/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17527-208817/.minikube/cache/iso/amd64/minikube-v1.32.0-1698684775-17527-amd64.iso...
	I1031 00:34:03.803487  256701 main.go:141] libmachine: (kindnet-740627) DBG | I1031 00:34:03.803333  256725 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/kindnet-740627/id_rsa...
	I1031 00:34:03.919429  256701 main.go:141] libmachine: (kindnet-740627) DBG | I1031 00:34:03.919276  256725 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/kindnet-740627/kindnet-740627.rawdisk...
	I1031 00:34:03.919468  256701 main.go:141] libmachine: (kindnet-740627) DBG | Writing magic tar header
	I1031 00:34:03.919493  256701 main.go:141] libmachine: (kindnet-740627) DBG | Writing SSH key tar header
	I1031 00:34:03.919527  256701 main.go:141] libmachine: (kindnet-740627) DBG | I1031 00:34:03.919385  256725 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17527-208817/.minikube/machines/kindnet-740627 ...
	I1031 00:34:03.919553  256701 main.go:141] libmachine: (kindnet-740627) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/kindnet-740627
	I1031 00:34:03.919565  256701 main.go:141] libmachine: (kindnet-740627) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube/machines
	I1031 00:34:03.919579  256701 main.go:141] libmachine: (kindnet-740627) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube/machines/kindnet-740627 (perms=drwx------)
	I1031 00:34:03.919595  256701 main.go:141] libmachine: (kindnet-740627) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube
	I1031 00:34:03.919612  256701 main.go:141] libmachine: (kindnet-740627) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817
	I1031 00:34:03.919628  256701 main.go:141] libmachine: (kindnet-740627) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 00:34:03.919645  256701 main.go:141] libmachine: (kindnet-740627) DBG | Checking permissions on dir: /home/jenkins
	I1031 00:34:03.919658  256701 main.go:141] libmachine: (kindnet-740627) DBG | Checking permissions on dir: /home
	I1031 00:34:03.919675  256701 main.go:141] libmachine: (kindnet-740627) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube/machines (perms=drwxr-xr-x)
	I1031 00:34:03.919695  256701 main.go:141] libmachine: (kindnet-740627) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube (perms=drwxr-xr-x)
	I1031 00:34:03.919710  256701 main.go:141] libmachine: (kindnet-740627) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817 (perms=drwxrwxr-x)
	I1031 00:34:03.919723  256701 main.go:141] libmachine: (kindnet-740627) DBG | Skipping /home - not owner
	I1031 00:34:03.919815  256701 main.go:141] libmachine: (kindnet-740627) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 00:34:03.919850  256701 main.go:141] libmachine: (kindnet-740627) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 00:34:03.919947  256701 main.go:141] libmachine: (kindnet-740627) Creating domain...
	I1031 00:34:03.921357  256701 main.go:141] libmachine: (kindnet-740627) define libvirt domain using xml: 
	I1031 00:34:03.921381  256701 main.go:141] libmachine: (kindnet-740627) <domain type='kvm'>
	I1031 00:34:03.921392  256701 main.go:141] libmachine: (kindnet-740627)   <name>kindnet-740627</name>
	I1031 00:34:03.921401  256701 main.go:141] libmachine: (kindnet-740627)   <memory unit='MiB'>3072</memory>
	I1031 00:34:03.921415  256701 main.go:141] libmachine: (kindnet-740627)   <vcpu>2</vcpu>
	I1031 00:34:03.921425  256701 main.go:141] libmachine: (kindnet-740627)   <features>
	I1031 00:34:03.921435  256701 main.go:141] libmachine: (kindnet-740627)     <acpi/>
	I1031 00:34:03.921446  256701 main.go:141] libmachine: (kindnet-740627)     <apic/>
	I1031 00:34:03.921458  256701 main.go:141] libmachine: (kindnet-740627)     <pae/>
	I1031 00:34:03.921469  256701 main.go:141] libmachine: (kindnet-740627)     
	I1031 00:34:03.921483  256701 main.go:141] libmachine: (kindnet-740627)   </features>
	I1031 00:34:03.921493  256701 main.go:141] libmachine: (kindnet-740627)   <cpu mode='host-passthrough'>
	I1031 00:34:03.921506  256701 main.go:141] libmachine: (kindnet-740627)   
	I1031 00:34:03.921528  256701 main.go:141] libmachine: (kindnet-740627)   </cpu>
	I1031 00:34:03.921553  256701 main.go:141] libmachine: (kindnet-740627)   <os>
	I1031 00:34:03.921568  256701 main.go:141] libmachine: (kindnet-740627)     <type>hvm</type>
	I1031 00:34:03.921584  256701 main.go:141] libmachine: (kindnet-740627)     <boot dev='cdrom'/>
	I1031 00:34:03.921595  256701 main.go:141] libmachine: (kindnet-740627)     <boot dev='hd'/>
	I1031 00:34:03.921607  256701 main.go:141] libmachine: (kindnet-740627)     <bootmenu enable='no'/>
	I1031 00:34:03.921618  256701 main.go:141] libmachine: (kindnet-740627)   </os>
	I1031 00:34:03.921629  256701 main.go:141] libmachine: (kindnet-740627)   <devices>
	I1031 00:34:03.921640  256701 main.go:141] libmachine: (kindnet-740627)     <disk type='file' device='cdrom'>
	I1031 00:34:03.921661  256701 main.go:141] libmachine: (kindnet-740627)       <source file='/home/jenkins/minikube-integration/17527-208817/.minikube/machines/kindnet-740627/boot2docker.iso'/>
	I1031 00:34:03.921679  256701 main.go:141] libmachine: (kindnet-740627)       <target dev='hdc' bus='scsi'/>
	I1031 00:34:03.921692  256701 main.go:141] libmachine: (kindnet-740627)       <readonly/>
	I1031 00:34:03.921702  256701 main.go:141] libmachine: (kindnet-740627)     </disk>
	I1031 00:34:03.921727  256701 main.go:141] libmachine: (kindnet-740627)     <disk type='file' device='disk'>
	I1031 00:34:03.921754  256701 main.go:141] libmachine: (kindnet-740627)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 00:34:03.921771  256701 main.go:141] libmachine: (kindnet-740627)       <source file='/home/jenkins/minikube-integration/17527-208817/.minikube/machines/kindnet-740627/kindnet-740627.rawdisk'/>
	I1031 00:34:03.921784  256701 main.go:141] libmachine: (kindnet-740627)       <target dev='hda' bus='virtio'/>
	I1031 00:34:03.921797  256701 main.go:141] libmachine: (kindnet-740627)     </disk>
	I1031 00:34:03.921809  256701 main.go:141] libmachine: (kindnet-740627)     <interface type='network'>
	I1031 00:34:03.921823  256701 main.go:141] libmachine: (kindnet-740627)       <source network='mk-kindnet-740627'/>
	I1031 00:34:03.921839  256701 main.go:141] libmachine: (kindnet-740627)       <model type='virtio'/>
	I1031 00:34:03.921853  256701 main.go:141] libmachine: (kindnet-740627)     </interface>
	I1031 00:34:03.921865  256701 main.go:141] libmachine: (kindnet-740627)     <interface type='network'>
	I1031 00:34:03.921878  256701 main.go:141] libmachine: (kindnet-740627)       <source network='default'/>
	I1031 00:34:03.921890  256701 main.go:141] libmachine: (kindnet-740627)       <model type='virtio'/>
	I1031 00:34:03.921904  256701 main.go:141] libmachine: (kindnet-740627)     </interface>
	I1031 00:34:03.921915  256701 main.go:141] libmachine: (kindnet-740627)     <serial type='pty'>
	I1031 00:34:03.921929  256701 main.go:141] libmachine: (kindnet-740627)       <target port='0'/>
	I1031 00:34:03.921940  256701 main.go:141] libmachine: (kindnet-740627)     </serial>
	I1031 00:34:03.921954  256701 main.go:141] libmachine: (kindnet-740627)     <console type='pty'>
	I1031 00:34:03.921967  256701 main.go:141] libmachine: (kindnet-740627)       <target type='serial' port='0'/>
	I1031 00:34:03.921979  256701 main.go:141] libmachine: (kindnet-740627)     </console>
	I1031 00:34:03.921989  256701 main.go:141] libmachine: (kindnet-740627)     <rng model='virtio'>
	I1031 00:34:03.922004  256701 main.go:141] libmachine: (kindnet-740627)       <backend model='random'>/dev/random</backend>
	I1031 00:34:03.922014  256701 main.go:141] libmachine: (kindnet-740627)     </rng>
	I1031 00:34:03.922025  256701 main.go:141] libmachine: (kindnet-740627)     
	I1031 00:34:03.922037  256701 main.go:141] libmachine: (kindnet-740627)     
	I1031 00:34:03.922049  256701 main.go:141] libmachine: (kindnet-740627)   </devices>
	I1031 00:34:03.922064  256701 main.go:141] libmachine: (kindnet-740627) </domain>
	I1031 00:34:03.922078  256701 main.go:141] libmachine: (kindnet-740627) 
	I1031 00:34:03.926508  256701 main.go:141] libmachine: (kindnet-740627) DBG | domain kindnet-740627 has defined MAC address 52:54:00:ef:8d:41 in network default
	I1031 00:34:03.927199  256701 main.go:141] libmachine: (kindnet-740627) Ensuring networks are active...
	I1031 00:34:03.927237  256701 main.go:141] libmachine: (kindnet-740627) DBG | domain kindnet-740627 has defined MAC address 52:54:00:16:ca:8c in network mk-kindnet-740627
	I1031 00:34:03.928009  256701 main.go:141] libmachine: (kindnet-740627) Ensuring network default is active
	I1031 00:34:03.928319  256701 main.go:141] libmachine: (kindnet-740627) Ensuring network mk-kindnet-740627 is active
	I1031 00:34:03.928894  256701 main.go:141] libmachine: (kindnet-740627) Getting domain xml...
	I1031 00:34:03.929754  256701 main.go:141] libmachine: (kindnet-740627) Creating domain...
	I1031 00:34:05.304540  256701 main.go:141] libmachine: (kindnet-740627) Waiting to get IP...
	I1031 00:34:05.305675  256701 main.go:141] libmachine: (kindnet-740627) DBG | domain kindnet-740627 has defined MAC address 52:54:00:16:ca:8c in network mk-kindnet-740627
	I1031 00:34:05.306157  256701 main.go:141] libmachine: (kindnet-740627) DBG | unable to find current IP address of domain kindnet-740627 in network mk-kindnet-740627
	I1031 00:34:05.306186  256701 main.go:141] libmachine: (kindnet-740627) DBG | I1031 00:34:05.306116  256725 retry.go:31] will retry after 226.494255ms: waiting for machine to come up
	I1031 00:34:05.534823  256701 main.go:141] libmachine: (kindnet-740627) DBG | domain kindnet-740627 has defined MAC address 52:54:00:16:ca:8c in network mk-kindnet-740627
	I1031 00:34:05.535434  256701 main.go:141] libmachine: (kindnet-740627) DBG | unable to find current IP address of domain kindnet-740627 in network mk-kindnet-740627
	I1031 00:34:05.535466  256701 main.go:141] libmachine: (kindnet-740627) DBG | I1031 00:34:05.535346  256725 retry.go:31] will retry after 352.73313ms: waiting for machine to come up
	I1031 00:34:05.890181  256701 main.go:141] libmachine: (kindnet-740627) DBG | domain kindnet-740627 has defined MAC address 52:54:00:16:ca:8c in network mk-kindnet-740627
	I1031 00:34:05.890669  256701 main.go:141] libmachine: (kindnet-740627) DBG | unable to find current IP address of domain kindnet-740627 in network mk-kindnet-740627
	I1031 00:34:05.890706  256701 main.go:141] libmachine: (kindnet-740627) DBG | I1031 00:34:05.890636  256725 retry.go:31] will retry after 299.441849ms: waiting for machine to come up
	I1031 00:34:06.192251  256701 main.go:141] libmachine: (kindnet-740627) DBG | domain kindnet-740627 has defined MAC address 52:54:00:16:ca:8c in network mk-kindnet-740627
	I1031 00:34:06.192769  256701 main.go:141] libmachine: (kindnet-740627) DBG | unable to find current IP address of domain kindnet-740627 in network mk-kindnet-740627
	I1031 00:34:06.192815  256701 main.go:141] libmachine: (kindnet-740627) DBG | I1031 00:34:06.192725  256725 retry.go:31] will retry after 570.766867ms: waiting for machine to come up
	I1031 00:34:06.765858  256701 main.go:141] libmachine: (kindnet-740627) DBG | domain kindnet-740627 has defined MAC address 52:54:00:16:ca:8c in network mk-kindnet-740627
	I1031 00:34:06.766289  256701 main.go:141] libmachine: (kindnet-740627) DBG | unable to find current IP address of domain kindnet-740627 in network mk-kindnet-740627
	I1031 00:34:06.766320  256701 main.go:141] libmachine: (kindnet-740627) DBG | I1031 00:34:06.766236  256725 retry.go:31] will retry after 593.922989ms: waiting for machine to come up
	I1031 00:34:07.362122  256701 main.go:141] libmachine: (kindnet-740627) DBG | domain kindnet-740627 has defined MAC address 52:54:00:16:ca:8c in network mk-kindnet-740627
	I1031 00:34:07.362554  256701 main.go:141] libmachine: (kindnet-740627) DBG | unable to find current IP address of domain kindnet-740627 in network mk-kindnet-740627
	I1031 00:34:07.362580  256701 main.go:141] libmachine: (kindnet-740627) DBG | I1031 00:34:07.362490  256725 retry.go:31] will retry after 947.410978ms: waiting for machine to come up
	I1031 00:34:08.311759  256701 main.go:141] libmachine: (kindnet-740627) DBG | domain kindnet-740627 has defined MAC address 52:54:00:16:ca:8c in network mk-kindnet-740627
	I1031 00:34:08.312390  256701 main.go:141] libmachine: (kindnet-740627) DBG | unable to find current IP address of domain kindnet-740627 in network mk-kindnet-740627
	I1031 00:34:08.312419  256701 main.go:141] libmachine: (kindnet-740627) DBG | I1031 00:34:08.312338  256725 retry.go:31] will retry after 737.112114ms: waiting for machine to come up
	I1031 00:34:04.594234  255091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:34:05.093880  255091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:34:05.594150  255091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:34:05.620977  255091 api_server.go:72] duration metric: took 2.564203267s to wait for apiserver process to appear ...
	I1031 00:34:05.621009  255091 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:34:05.621030  255091 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8443/healthz ...
	I1031 00:34:09.772645  255091 api_server.go:279] https://192.168.72.163:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:34:09.772682  255091 api_server.go:103] status: https://192.168.72.163:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:34:09.772695  255091 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8443/healthz ...
	I1031 00:34:09.831318  255091 api_server.go:279] https://192.168.72.163:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:34:09.831356  255091 api_server.go:103] status: https://192.168.72.163:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:34:10.331631  255091 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8443/healthz ...
	I1031 00:34:10.338022  255091 api_server.go:279] https://192.168.72.163:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:34:10.338071  255091 api_server.go:103] status: https://192.168.72.163:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:34:10.831588  255091 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8443/healthz ...
	I1031 00:34:10.837517  255091 api_server.go:279] https://192.168.72.163:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:34:10.837555  255091 api_server.go:103] status: https://192.168.72.163:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:34:11.332175  255091 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8443/healthz ...
	I1031 00:34:11.337812  255091 api_server.go:279] https://192.168.72.163:8443/healthz returned 200:
	ok
	I1031 00:34:11.347594  255091 api_server.go:141] control plane version: v1.28.3
	I1031 00:34:11.347626  255091 api_server.go:131] duration metric: took 5.726607643s to wait for apiserver health ...
	I1031 00:34:11.347638  255091 cni.go:84] Creating CNI manager for ""
	I1031 00:34:11.347646  255091 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:34:11.349677  255091 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:34:11.351283  255091 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:34:11.374648  255091 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:34:11.399576  255091 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:34:11.427993  255091 system_pods.go:59] 8 kube-system pods found
	I1031 00:34:11.428038  255091 system_pods.go:61] "coredns-5dd5756b68-5vq82" [8fc77f64-e3ab-426e-845a-4e65219aae48] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1031 00:34:11.428051  255091 system_pods.go:61] "etcd-newest-cni-558362" [1430b0b9-067c-4b3f-a3aa-fb9790b545d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1031 00:34:11.428103  255091 system_pods.go:61] "kube-apiserver-newest-cni-558362" [671fe208-7033-4994-a91a-c6e5d57f5d1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1031 00:34:11.428135  255091 system_pods.go:61] "kube-controller-manager-newest-cni-558362" [2a0a920f-3eb4-45c0-84d0-fd4ef7145abf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1031 00:34:11.428149  255091 system_pods.go:61] "kube-proxy-s9zbn" [0865b53a-6f53-4333-bf45-b28e432e3699] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1031 00:34:11.428162  255091 system_pods.go:61] "kube-scheduler-newest-cni-558362" [c208d455-d115-40ec-92b8-3964a435cde3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1031 00:34:11.428174  255091 system_pods.go:61] "metrics-server-57f55c9bc5-5cn72" [96b93e05-2f61-4477-8a15-a4c480c9a533] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:34:11.428190  255091 system_pods.go:61] "storage-provisioner" [2a674524-ffa3-46f3-b1e2-6694015dc87d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1031 00:34:11.428201  255091 system_pods.go:74] duration metric: took 28.601747ms to wait for pod list to return data ...
	I1031 00:34:11.428215  255091 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:34:11.432381  255091 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:34:11.432421  255091 node_conditions.go:123] node cpu capacity is 2
	I1031 00:34:11.432435  255091 node_conditions.go:105] duration metric: took 4.21388ms to run NodePressure ...
	I1031 00:34:11.432464  255091 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:34:11.735945  255091 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:34:11.758797  255091 ops.go:34] apiserver oom_adj: -16
	I1031 00:34:11.758829  255091 kubeadm.go:640] restartCluster took 20.53473904s
	I1031 00:34:11.758840  255091 kubeadm.go:406] StartCluster complete in 20.59173128s
	I1031 00:34:11.758865  255091 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:34:11.758955  255091 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:34:11.760917  255091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:34:11.761219  255091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:34:11.761352  255091 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:34:11.761458  255091 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-558362"
	I1031 00:34:11.761474  255091 addons.go:69] Setting default-storageclass=true in profile "newest-cni-558362"
	I1031 00:34:11.761481  255091 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-558362"
	W1031 00:34:11.761489  255091 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:34:11.761495  255091 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-558362"
	I1031 00:34:11.761538  255091 config.go:182] Loaded profile config "newest-cni-558362": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:34:11.761545  255091 host.go:66] Checking if "newest-cni-558362" exists ...
	I1031 00:34:11.761579  255091 addons.go:69] Setting dashboard=true in profile "newest-cni-558362"
	I1031 00:34:11.761593  255091 addons.go:231] Setting addon dashboard=true in "newest-cni-558362"
	W1031 00:34:11.761601  255091 addons.go:240] addon dashboard should already be in state true
	I1031 00:34:11.761638  255091 host.go:66] Checking if "newest-cni-558362" exists ...
	I1031 00:34:11.761957  255091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:34:11.761969  255091 addons.go:69] Setting metrics-server=true in profile "newest-cni-558362"
	I1031 00:34:11.761980  255091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:34:11.761985  255091 addons.go:231] Setting addon metrics-server=true in "newest-cni-558362"
	W1031 00:34:11.761993  255091 addons.go:240] addon metrics-server should already be in state true
	I1031 00:34:11.762050  255091 host.go:66] Checking if "newest-cni-558362" exists ...
	I1031 00:34:11.762085  255091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:34:11.762110  255091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:34:11.761958  255091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:34:11.762363  255091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:34:11.762399  255091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:34:11.762438  255091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:34:11.771689  255091 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-558362" context rescaled to 1 replicas
	I1031 00:34:11.771737  255091 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.163 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:34:11.774348  255091 out.go:177] * Verifying Kubernetes components...
	I1031 00:34:11.776038  255091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:34:11.781621  255091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41349
	I1031 00:34:11.781820  255091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33897
	I1031 00:34:11.781852  255091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36957
	I1031 00:34:11.782111  255091 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:34:11.782298  255091 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:34:11.782453  255091 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:34:11.782658  255091 main.go:141] libmachine: Using API Version  1
	I1031 00:34:11.782689  255091 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:34:11.782818  255091 main.go:141] libmachine: Using API Version  1
	I1031 00:34:11.782831  255091 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:34:11.782968  255091 main.go:141] libmachine: Using API Version  1
	I1031 00:34:11.782987  255091 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:34:11.783036  255091 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:34:11.783189  255091 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:34:11.783260  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetState
	I1031 00:34:11.783779  255091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:34:11.783850  255091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:34:11.784240  255091 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:34:11.784770  255091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:34:11.784814  255091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:34:11.788109  255091 addons.go:231] Setting addon default-storageclass=true in "newest-cni-558362"
	W1031 00:34:11.788135  255091 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:34:11.788166  255091 host.go:66] Checking if "newest-cni-558362" exists ...
	I1031 00:34:11.788607  255091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:34:11.788646  255091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:34:11.788959  255091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34539
	I1031 00:34:11.789471  255091 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:34:11.790038  255091 main.go:141] libmachine: Using API Version  1
	I1031 00:34:11.790056  255091 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:34:11.790466  255091 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:34:11.790987  255091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:34:11.791014  255091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:34:11.804461  255091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44701
	I1031 00:34:11.804463  255091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44559
	I1031 00:34:11.805014  255091 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:34:11.805056  255091 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:34:11.805535  255091 main.go:141] libmachine: Using API Version  1
	I1031 00:34:11.805555  255091 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:34:11.805627  255091 main.go:141] libmachine: Using API Version  1
	I1031 00:34:11.805637  255091 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:34:11.805900  255091 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:34:11.805969  255091 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:34:11.806070  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetState
	I1031 00:34:11.806129  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetState
	I1031 00:34:11.810664  255091 main.go:141] libmachine: (newest-cni-558362) Calling .DriverName
	I1031 00:34:11.810735  255091 main.go:141] libmachine: (newest-cni-558362) Calling .DriverName
	I1031 00:34:11.812675  255091 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1031 00:34:11.815351  255091 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:34:11.817226  255091 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:34:11.817245  255091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:34:11.817268  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetSSHHostname
	I1031 00:34:11.818713  255091 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1031 00:34:11.820656  255091 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1031 00:34:11.820680  255091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1031 00:34:11.820700  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetSSHHostname
	I1031 00:34:11.820872  255091 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:34:11.821760  255091 main.go:141] libmachine: (newest-cni-558362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:0f:39", ip: ""} in network mk-newest-cni-558362: {Iface:virbr1 ExpiryTime:2023-10-31 01:32:16 +0000 UTC Type:0 Mac:52:54:00:41:0f:39 Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:newest-cni-558362 Clientid:01:52:54:00:41:0f:39}
	I1031 00:34:11.821790  255091 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined IP address 192.168.72.163 and MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:34:11.821963  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetSSHPort
	I1031 00:34:11.822166  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetSSHKeyPath
	I1031 00:34:11.822338  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetSSHUsername
	I1031 00:34:11.822510  255091 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/newest-cni-558362/id_rsa Username:docker}
	I1031 00:34:11.823079  255091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45131
	I1031 00:34:11.823951  255091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I1031 00:34:11.824404  255091 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:34:11.824527  255091 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:34:11.824811  255091 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:34:11.824971  255091 main.go:141] libmachine: Using API Version  1
	I1031 00:34:11.824999  255091 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:34:11.825310  255091 main.go:141] libmachine: Using API Version  1
	I1031 00:34:11.825327  255091 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:34:11.825390  255091 main.go:141] libmachine: (newest-cni-558362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:0f:39", ip: ""} in network mk-newest-cni-558362: {Iface:virbr1 ExpiryTime:2023-10-31 01:32:16 +0000 UTC Type:0 Mac:52:54:00:41:0f:39 Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:newest-cni-558362 Clientid:01:52:54:00:41:0f:39}
	I1031 00:34:11.825410  255091 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined IP address 192.168.72.163 and MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:34:11.825526  255091 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:34:11.825584  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetSSHPort
	I1031 00:34:11.825698  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetState
	I1031 00:34:11.825716  255091 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:34:11.825749  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetSSHKeyPath
	I1031 00:34:11.825847  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetSSHUsername
	I1031 00:34:11.825979  255091 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/newest-cni-558362/id_rsa Username:docker}
	I1031 00:34:11.827745  255091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:34:11.827790  255091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:34:11.827888  255091 main.go:141] libmachine: (newest-cni-558362) Calling .DriverName
	I1031 00:34:11.830174  255091 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:34:09.050798  256701 main.go:141] libmachine: (kindnet-740627) DBG | domain kindnet-740627 has defined MAC address 52:54:00:16:ca:8c in network mk-kindnet-740627
	I1031 00:34:09.051175  256701 main.go:141] libmachine: (kindnet-740627) DBG | unable to find current IP address of domain kindnet-740627 in network mk-kindnet-740627
	I1031 00:34:09.051204  256701 main.go:141] libmachine: (kindnet-740627) DBG | I1031 00:34:09.051174  256725 retry.go:31] will retry after 1.452835739s: waiting for machine to come up
	I1031 00:34:10.505473  256701 main.go:141] libmachine: (kindnet-740627) DBG | domain kindnet-740627 has defined MAC address 52:54:00:16:ca:8c in network mk-kindnet-740627
	I1031 00:34:10.506025  256701 main.go:141] libmachine: (kindnet-740627) DBG | unable to find current IP address of domain kindnet-740627 in network mk-kindnet-740627
	I1031 00:34:10.506058  256701 main.go:141] libmachine: (kindnet-740627) DBG | I1031 00:34:10.505977  256725 retry.go:31] will retry after 1.752071938s: waiting for machine to come up
	I1031 00:34:12.259624  256701 main.go:141] libmachine: (kindnet-740627) DBG | domain kindnet-740627 has defined MAC address 52:54:00:16:ca:8c in network mk-kindnet-740627
	I1031 00:34:12.260170  256701 main.go:141] libmachine: (kindnet-740627) DBG | unable to find current IP address of domain kindnet-740627 in network mk-kindnet-740627
	I1031 00:34:12.260216  256701 main.go:141] libmachine: (kindnet-740627) DBG | I1031 00:34:12.260121  256725 retry.go:31] will retry after 1.575061383s: waiting for machine to come up
	I1031 00:34:11.832189  255091 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:34:11.832208  255091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:34:11.832226  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetSSHHostname
	I1031 00:34:11.835110  255091 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:34:11.835433  255091 main.go:141] libmachine: (newest-cni-558362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:0f:39", ip: ""} in network mk-newest-cni-558362: {Iface:virbr1 ExpiryTime:2023-10-31 01:32:16 +0000 UTC Type:0 Mac:52:54:00:41:0f:39 Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:newest-cni-558362 Clientid:01:52:54:00:41:0f:39}
	I1031 00:34:11.835468  255091 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined IP address 192.168.72.163 and MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:34:11.835610  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetSSHPort
	I1031 00:34:11.835809  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetSSHKeyPath
	I1031 00:34:11.835977  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetSSHUsername
	I1031 00:34:11.836147  255091 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/newest-cni-558362/id_rsa Username:docker}
	I1031 00:34:11.850540  255091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38031
	I1031 00:34:11.851062  255091 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:34:11.851642  255091 main.go:141] libmachine: Using API Version  1
	I1031 00:34:11.851666  255091 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:34:11.852083  255091 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:34:11.852335  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetState
	I1031 00:34:11.854045  255091 main.go:141] libmachine: (newest-cni-558362) Calling .DriverName
	I1031 00:34:11.854317  255091 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:34:11.854334  255091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:34:11.854352  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetSSHHostname
	I1031 00:34:11.857530  255091 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:34:11.858118  255091 main.go:141] libmachine: (newest-cni-558362) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:0f:39", ip: ""} in network mk-newest-cni-558362: {Iface:virbr1 ExpiryTime:2023-10-31 01:32:16 +0000 UTC Type:0 Mac:52:54:00:41:0f:39 Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:newest-cni-558362 Clientid:01:52:54:00:41:0f:39}
	I1031 00:34:11.858142  255091 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined IP address 192.168.72.163 and MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:34:11.858342  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetSSHPort
	I1031 00:34:11.858546  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetSSHKeyPath
	I1031 00:34:11.858706  255091 main.go:141] libmachine: (newest-cni-558362) Calling .GetSSHUsername
	I1031 00:34:11.858856  255091 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/newest-cni-558362/id_rsa Username:docker}
	I1031 00:34:12.020629  255091 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1031 00:34:12.020660  255091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1031 00:34:12.037089  255091 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:34:12.037116  255091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:34:12.061940  255091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:34:12.082762  255091 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1031 00:34:12.082795  255091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1031 00:34:12.088256  255091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:34:12.095909  255091 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:34:12.095961  255091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:34:12.216286  255091 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1031 00:34:12.216320  255091 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:34:12.216387  255091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:34:12.273021  255091 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1031 00:34:12.273054  255091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1031 00:34:12.285168  255091 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:34:12.285196  255091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:34:12.364414  255091 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1031 00:34:12.364454  255091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1031 00:34:12.380823  255091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:34:12.440161  255091 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1031 00:34:12.440188  255091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1031 00:34:12.519987  255091 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1031 00:34:12.520021  255091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1031 00:34:12.562946  255091 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1031 00:34:12.562984  255091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1031 00:34:12.619655  255091 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1031 00:34:12.619747  255091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1031 00:34:12.646809  255091 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1031 00:34:12.646839  255091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1031 00:34:12.665598  255091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
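	The addon steps above follow one pattern: each manifest is copied to /etc/kubernetes/addons on the node, then the bundled kubectl is invoked once with the node's kubeconfig and a -f flag per manifest. The following is a minimal sketch (not minikube's actual implementation) of that apply step for the metrics-server manifests shown above, assuming it runs on the node itself with sudo available; paths and the kubectl invocation are taken verbatim from the log lines.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Manifests already staged under /etc/kubernetes/addons by the scp step in the log.
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}

	// Mirrors: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f ...
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.28.3/kubectl",
		"apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}

	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}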
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-31 00:12:28 UTC, ends at Tue 2023-10-31 00:34:14 UTC. --
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.773674626Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4628a58fa00c16781c820f65bf281fbf0258cbcb3c35aa8c4c81aa24a3da3549,Metadata:&PodSandboxMetadata{Name:busybox,Uid:ac0523db-98c6-4583-8cc4-b0cd6bea7a8b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698711189811434662,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac0523db-98c6-4583-8cc4-b0cd6bea7a8b,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-31T00:13:01.910930187Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9d31d8abd8f4effb317d559c8af3a457099773c57eb0672bd1f9f4cf2b37c89f,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-dqrs4,Uid:f6d80a09-c397-4c78-a038-f07cad11de9c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698711189789844
081,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-dqrs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d80a09-c397-4c78-a038-f07cad11de9c,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-31T00:13:01.910913236Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:762ffea1a5c1c5a11a92be25c05c836e7a66fc58c2f2fae6cb80c016085e71ad,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-pm6qx,Uid:5ed61015-eb88-4381-adc3-8d1f4021c6aa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698711189765381340,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-pm6qx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ed61015-eb88-4381-adc3-8d1f4021c6aa,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-31T00:13:01.
910909381Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4f4af887bf59e4b461388c62f300ac4242670c3f543fe7d6cf6448832bd5cd69,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6bce0572-aad8-4a9f-978f-9bd0ff62904a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698711182259629626,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bce0572-aad8-4a9f-978f-9bd0ff62904a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-
minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-10-31T00:13:01.910911359Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e20b5a6f9a35d6c484c86d92263ff97d86c5800b46bcedb4ccfb2f987db17264,Metadata:&PodSandboxMetadata{Name:kube-proxy-287dq,Uid:c9c3a3a9-ff79-4cd8-ab26-a4ca2bec1fd9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698711182255513363,Labels:map[string]string{controller-revision-hash: dffc744c9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-287dq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c3a3a9-ff79-4cd8-ab26-a4ca2bec1fd9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io
/config.seen: 2023-10-31T00:13:01.910916828Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0823b451eb5f8e93b0532ad5273cf195d53f6369a9c151fa3f9cb8bdcc7e5ee1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-078843,Uid:202667cac640795194af9959fa18541d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698711176466276948,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202667cac640795194af9959fa18541d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.2:8443,kubernetes.io/config.hash: 202667cac640795194af9959fa18541d,kubernetes.io/config.seen: 2023-10-31T00:12:55.919959023Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0663bfc12e03afc5aa5f401fd69c6a6a2980c923810da197c9f2dda022dbe417,Metadata:&PodSandboxMetadata{N
ame:kube-controller-manager-embed-certs-078843,Uid:9637d799fe724569676c9f38ab0bb286,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698711176447927391,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9637d799fe724569676c9f38ab0bb286,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9637d799fe724569676c9f38ab0bb286,kubernetes.io/config.seen: 2023-10-31T00:12:55.919960157Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9c78a5ff74b936115a58fade7a3fab08bf6794745a9c21b4fee2f2244f6711f7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-078843,Uid:9474a5b90c0a45ef498a0096ce5ccfa0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698711176444301372,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.na
me: kube-scheduler-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9474a5b90c0a45ef498a0096ce5ccfa0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9474a5b90c0a45ef498a0096ce5ccfa0,kubernetes.io/config.seen: 2023-10-31T00:12:55.919960894Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:daf5d500c92cb215c4ce18baa548c09e9bcdfc3b49eea4a6aa14beccf7a9c342,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-078843,Uid:cae247f28a3a4d778946c27f65cc3d40,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698711176421323672,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae247f28a3a4d778946c27f65cc3d40,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.2:2379,kubernetes.io/config.hash: cae247f28a3a4d778946c27f65cc3d4
0,kubernetes.io/config.seen: 2023-10-31T00:12:55.919954956Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=0d9693c8-8c8b-4ae4-8d70-345f8cbdb051 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.774584437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c932c33f-6c5a-4ef3-8d0a-380f3fe02b85 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.774677243Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c932c33f-6c5a-4ef3-8d0a-380f3fe02b85 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.774929427Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3,PodSandboxId:4f4af887bf59e4b461388c62f300ac4242670c3f543fe7d6cf6448832bd5cd69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711214186674561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bce0572-aad8-4a9f-978f-9bd0ff62904a,},Annotations:map[string]string{io.kubernetes.container.hash: 7e579188,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff776cfd1370a2ecd2ebd919bf887815461feca2c3604f89b31255cfcadd84f3,PodSandboxId:4628a58fa00c16781c820f65bf281fbf0258cbcb3c35aa8c4c81aa24a3da3549,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698711192442163192,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac0523db-98c6-4583-8cc4-b0cd6bea7a8b,},Annotations:map[string]string{io.kubernetes.container.hash: ff541a11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26,PodSandboxId:9d31d8abd8f4effb317d559c8af3a457099773c57eb0672bd1f9f4cf2b37c89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698711190773895170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dqrs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d80a09-c397-4c78-a038-f07cad11de9c,},Annotations:map[string]string{io.kubernetes.container.hash: 1cb5b569,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c,PodSandboxId:4f4af887bf59e4b461388c62f300ac4242670c3f543fe7d6cf6448832bd5cd69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698711183249808394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 6bce0572-aad8-4a9f-978f-9bd0ff62904a,},Annotations:map[string]string{io.kubernetes.container.hash: 7e579188,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3,PodSandboxId:e20b5a6f9a35d6c484c86d92263ff97d86c5800b46bcedb4ccfb2f987db17264,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698711183124950068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-287dq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c3a3a9-ff
79-4cd8-ab26-a4ca2bec1fd9,},Annotations:map[string]string{io.kubernetes.container.hash: 404a6c81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6,PodSandboxId:daf5d500c92cb215c4ce18baa548c09e9bcdfc3b49eea4a6aa14beccf7a9c342,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698711177512324378,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae247f28a3a4d778946c27f65cc3d40,},Annotations:map[string
]string{io.kubernetes.container.hash: d3bd4104,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80,PodSandboxId:9c78a5ff74b936115a58fade7a3fab08bf6794745a9c21b4fee2f2244f6711f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698711177266496863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9474a5b90c0a45ef498a0096ce5ccfa0,},Annotations:map[string]string{io
.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70,PodSandboxId:0663bfc12e03afc5aa5f401fd69c6a6a2980c923810da197c9f2dda022dbe417,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698711177144498313,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9637d799fe724569676c9f38ab0bb286,},Annota
tions:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033,PodSandboxId:0823b451eb5f8e93b0532ad5273cf195d53f6369a9c151fa3f9cb8bdcc7e5ee1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698711177026766214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202667cac640795194af9959fa18541d,},Annotations:map[
string]string{io.kubernetes.container.hash: 28ddfe21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c932c33f-6c5a-4ef3-8d0a-380f3fe02b85 name=/runtime.v1.RuntimeService/ListContainers
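	The ListContainers/ListPodSandbox traffic in this journal is the kubelet polling CRI-O over its gRPC CRI endpoint. As a rough illustration only, the sketch below issues the same ListContainers call directly, assuming the default CRI-O socket path (/var/run/crio/crio.sock, not shown in the log) and the k8s.io/cri-api v1 client; it is not part of the test suite.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Assumption: CRI-O is listening on its default unix socket.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Same RPC as the /runtime.v1.RuntimeService/ListContainers requests in the journal,
	// with an empty filter ("No filters were applied, returning full container list").
	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}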
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.820263912Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=54e13ef6-e18d-4e79-a3c3-1fe9c1076eb2 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.820369388Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=54e13ef6-e18d-4e79-a3c3-1fe9c1076eb2 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.821526569Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=18f3e6de-bf5b-4fa6-a95c-9ddfa46829aa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.821874392Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712454821863794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=18f3e6de-bf5b-4fa6-a95c-9ddfa46829aa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.822554732Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2c332476-d199-41ff-909f-4d1a4b368192 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.822633257Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2c332476-d199-41ff-909f-4d1a4b368192 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.822834139Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3,PodSandboxId:4f4af887bf59e4b461388c62f300ac4242670c3f543fe7d6cf6448832bd5cd69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711214186674561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bce0572-aad8-4a9f-978f-9bd0ff62904a,},Annotations:map[string]string{io.kubernetes.container.hash: 7e579188,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff776cfd1370a2ecd2ebd919bf887815461feca2c3604f89b31255cfcadd84f3,PodSandboxId:4628a58fa00c16781c820f65bf281fbf0258cbcb3c35aa8c4c81aa24a3da3549,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698711192442163192,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac0523db-98c6-4583-8cc4-b0cd6bea7a8b,},Annotations:map[string]string{io.kubernetes.container.hash: ff541a11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26,PodSandboxId:9d31d8abd8f4effb317d559c8af3a457099773c57eb0672bd1f9f4cf2b37c89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698711190773895170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dqrs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d80a09-c397-4c78-a038-f07cad11de9c,},Annotations:map[string]string{io.kubernetes.container.hash: 1cb5b569,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c,PodSandboxId:4f4af887bf59e4b461388c62f300ac4242670c3f543fe7d6cf6448832bd5cd69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698711183249808394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 6bce0572-aad8-4a9f-978f-9bd0ff62904a,},Annotations:map[string]string{io.kubernetes.container.hash: 7e579188,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3,PodSandboxId:e20b5a6f9a35d6c484c86d92263ff97d86c5800b46bcedb4ccfb2f987db17264,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698711183124950068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-287dq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c3a3a9-ff
79-4cd8-ab26-a4ca2bec1fd9,},Annotations:map[string]string{io.kubernetes.container.hash: 404a6c81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6,PodSandboxId:daf5d500c92cb215c4ce18baa548c09e9bcdfc3b49eea4a6aa14beccf7a9c342,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698711177512324378,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae247f28a3a4d778946c27f65cc3d40,},Annotations:map[string
]string{io.kubernetes.container.hash: d3bd4104,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80,PodSandboxId:9c78a5ff74b936115a58fade7a3fab08bf6794745a9c21b4fee2f2244f6711f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698711177266496863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9474a5b90c0a45ef498a0096ce5ccfa0,},Annotations:map[string]string{io
.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70,PodSandboxId:0663bfc12e03afc5aa5f401fd69c6a6a2980c923810da197c9f2dda022dbe417,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698711177144498313,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9637d799fe724569676c9f38ab0bb286,},Annota
tions:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033,PodSandboxId:0823b451eb5f8e93b0532ad5273cf195d53f6369a9c151fa3f9cb8bdcc7e5ee1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698711177026766214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202667cac640795194af9959fa18541d,},Annotations:map[
string]string{io.kubernetes.container.hash: 28ddfe21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2c332476-d199-41ff-909f-4d1a4b368192 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.877524483Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1925fe20-a42e-468f-a6c0-09f6240cff5c name=/runtime.v1.RuntimeService/Version
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.877653911Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1925fe20-a42e-468f-a6c0-09f6240cff5c name=/runtime.v1.RuntimeService/Version
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.879252728Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e71217d3-49f7-40e3-9a5d-cc580a78785d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.879833550Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712454879810300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e71217d3-49f7-40e3-9a5d-cc580a78785d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.880747956Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b293613a-1d74-4ae6-91a6-9e7b829e8fd2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.880851847Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b293613a-1d74-4ae6-91a6-9e7b829e8fd2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.881188481Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3,PodSandboxId:4f4af887bf59e4b461388c62f300ac4242670c3f543fe7d6cf6448832bd5cd69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711214186674561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bce0572-aad8-4a9f-978f-9bd0ff62904a,},Annotations:map[string]string{io.kubernetes.container.hash: 7e579188,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff776cfd1370a2ecd2ebd919bf887815461feca2c3604f89b31255cfcadd84f3,PodSandboxId:4628a58fa00c16781c820f65bf281fbf0258cbcb3c35aa8c4c81aa24a3da3549,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698711192442163192,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac0523db-98c6-4583-8cc4-b0cd6bea7a8b,},Annotations:map[string]string{io.kubernetes.container.hash: ff541a11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26,PodSandboxId:9d31d8abd8f4effb317d559c8af3a457099773c57eb0672bd1f9f4cf2b37c89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698711190773895170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dqrs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d80a09-c397-4c78-a038-f07cad11de9c,},Annotations:map[string]string{io.kubernetes.container.hash: 1cb5b569,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c,PodSandboxId:4f4af887bf59e4b461388c62f300ac4242670c3f543fe7d6cf6448832bd5cd69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698711183249808394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 6bce0572-aad8-4a9f-978f-9bd0ff62904a,},Annotations:map[string]string{io.kubernetes.container.hash: 7e579188,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3,PodSandboxId:e20b5a6f9a35d6c484c86d92263ff97d86c5800b46bcedb4ccfb2f987db17264,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698711183124950068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-287dq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c3a3a9-ff
79-4cd8-ab26-a4ca2bec1fd9,},Annotations:map[string]string{io.kubernetes.container.hash: 404a6c81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6,PodSandboxId:daf5d500c92cb215c4ce18baa548c09e9bcdfc3b49eea4a6aa14beccf7a9c342,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698711177512324378,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae247f28a3a4d778946c27f65cc3d40,},Annotations:map[string
]string{io.kubernetes.container.hash: d3bd4104,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80,PodSandboxId:9c78a5ff74b936115a58fade7a3fab08bf6794745a9c21b4fee2f2244f6711f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698711177266496863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9474a5b90c0a45ef498a0096ce5ccfa0,},Annotations:map[string]string{io
.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70,PodSandboxId:0663bfc12e03afc5aa5f401fd69c6a6a2980c923810da197c9f2dda022dbe417,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698711177144498313,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9637d799fe724569676c9f38ab0bb286,},Annota
tions:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033,PodSandboxId:0823b451eb5f8e93b0532ad5273cf195d53f6369a9c151fa3f9cb8bdcc7e5ee1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698711177026766214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202667cac640795194af9959fa18541d,},Annotations:map[
string]string{io.kubernetes.container.hash: 28ddfe21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b293613a-1d74-4ae6-91a6-9e7b829e8fd2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.921824484Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=123621df-d00e-413a-8bac-ca647864b781 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.921919075Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=123621df-d00e-413a-8bac-ca647864b781 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.923247151Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9530ccaa-44d3-4f0b-803d-111c45b628b9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.923807985Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712454923785563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9530ccaa-44d3-4f0b-803d-111c45b628b9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.924465092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3e31b566-3bc6-4dc8-ab85-642c0fba0196 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.924587218Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3e31b566-3bc6-4dc8-ab85-642c0fba0196 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:34:14 embed-certs-078843 crio[711]: time="2023-10-31 00:34:14.924872996Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3,PodSandboxId:4f4af887bf59e4b461388c62f300ac4242670c3f543fe7d6cf6448832bd5cd69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711214186674561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bce0572-aad8-4a9f-978f-9bd0ff62904a,},Annotations:map[string]string{io.kubernetes.container.hash: 7e579188,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff776cfd1370a2ecd2ebd919bf887815461feca2c3604f89b31255cfcadd84f3,PodSandboxId:4628a58fa00c16781c820f65bf281fbf0258cbcb3c35aa8c4c81aa24a3da3549,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698711192442163192,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac0523db-98c6-4583-8cc4-b0cd6bea7a8b,},Annotations:map[string]string{io.kubernetes.container.hash: ff541a11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26,PodSandboxId:9d31d8abd8f4effb317d559c8af3a457099773c57eb0672bd1f9f4cf2b37c89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698711190773895170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-dqrs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d80a09-c397-4c78-a038-f07cad11de9c,},Annotations:map[string]string{io.kubernetes.container.hash: 1cb5b569,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c,PodSandboxId:4f4af887bf59e4b461388c62f300ac4242670c3f543fe7d6cf6448832bd5cd69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698711183249808394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 6bce0572-aad8-4a9f-978f-9bd0ff62904a,},Annotations:map[string]string{io.kubernetes.container.hash: 7e579188,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3,PodSandboxId:e20b5a6f9a35d6c484c86d92263ff97d86c5800b46bcedb4ccfb2f987db17264,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698711183124950068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-287dq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c3a3a9-ff
79-4cd8-ab26-a4ca2bec1fd9,},Annotations:map[string]string{io.kubernetes.container.hash: 404a6c81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6,PodSandboxId:daf5d500c92cb215c4ce18baa548c09e9bcdfc3b49eea4a6aa14beccf7a9c342,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698711177512324378,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae247f28a3a4d778946c27f65cc3d40,},Annotations:map[string
]string{io.kubernetes.container.hash: d3bd4104,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80,PodSandboxId:9c78a5ff74b936115a58fade7a3fab08bf6794745a9c21b4fee2f2244f6711f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698711177266496863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9474a5b90c0a45ef498a0096ce5ccfa0,},Annotations:map[string]string{io
.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70,PodSandboxId:0663bfc12e03afc5aa5f401fd69c6a6a2980c923810da197c9f2dda022dbe417,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698711177144498313,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9637d799fe724569676c9f38ab0bb286,},Annota
tions:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033,PodSandboxId:0823b451eb5f8e93b0532ad5273cf195d53f6369a9c151fa3f9cb8bdcc7e5ee1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698711177026766214,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-078843,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202667cac640795194af9959fa18541d,},Annotations:map[
string]string{io.kubernetes.container.hash: 28ddfe21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3e31b566-3bc6-4dc8-ab85-642c0fba0196 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	86e0b59eda801       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   4f4af887bf59e       storage-provisioner
	ff776cfd1370a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   4628a58fa00c1       busybox
	8e049ebc03e12       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      21 minutes ago      Running             coredns                   1                   9d31d8abd8f4e       coredns-5dd5756b68-dqrs4
	622298cd36157       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   4f4af887bf59e       storage-provisioner
	f52fe11ae8422       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      21 minutes ago      Running             kube-proxy                1                   e20b5a6f9a35d       kube-proxy-287dq
	35bf5adca8564       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      21 minutes ago      Running             etcd                      1                   daf5d500c92cb       etcd-embed-certs-078843
	ee4cc3844ed36       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      21 minutes ago      Running             kube-scheduler            1                   9c78a5ff74b93       kube-scheduler-embed-certs-078843
	4622dc85f3882       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      21 minutes ago      Running             kube-controller-manager   1                   0663bfc12e03a       kube-controller-manager-embed-certs-078843
	bb31ab0db497f       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      21 minutes ago      Running             kube-apiserver            1                   0823b451eb5f8       kube-apiserver-embed-certs-078843
	
	* 
	* ==> coredns [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54179 - 36798 "HINFO IN 2334349160939681849.7017679136187254627. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011791188s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-078843
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-078843
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4
	                    minikube.k8s.io/name=embed-certs-078843
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_31T00_04_59_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 00:04:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-078843
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 Oct 2023 00:34:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 00:33:57 +0000   Tue, 31 Oct 2023 00:04:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 00:33:57 +0000   Tue, 31 Oct 2023 00:04:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 00:33:57 +0000   Tue, 31 Oct 2023 00:04:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 00:33:57 +0000   Tue, 31 Oct 2023 00:13:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.2
	  Hostname:    embed-certs-078843
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 7431126be6a247cb89e27d326eef3e05
	  System UUID:                7431126b-e6a2-47cb-89e2-7d326eef3e05
	  Boot ID:                    7caa986b-82b9-47f7-ae69-a57fee90e2a7
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-dqrs4                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-078843                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-078843             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-078843    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-287dq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-078843             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-pm6qx               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node embed-certs-078843 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node embed-certs-078843 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node embed-certs-078843 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-078843 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-078843 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-078843 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                29m                kubelet          Node embed-certs-078843 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-078843 event: Registered Node embed-certs-078843 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-078843 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-078843 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-078843 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-078843 event: Registered Node embed-certs-078843 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct31 00:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068883] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.425685] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.467577] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.159282] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.503542] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.638603] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.112994] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.161515] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.125361] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.230014] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +17.569942] systemd-fstab-generator[912]: Ignoring "noauto" for root device
	[Oct31 00:13] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6] <==
	* {"level":"info","ts":"2023-10-31T00:13:33.39536Z","caller":"traceutil/trace.go:171","msg":"trace[1139944266] linearizableReadLoop","detail":"{readStateIndex:680; appliedIndex:679; }","duration":"138.876465ms","start":"2023-10-31T00:13:33.256463Z","end":"2023-10-31T00:13:33.395339Z","steps":["trace[1139944266] 'read index received'  (duration: 126.313353ms)","trace[1139944266] 'applied index is now lower than readState.Index'  (duration: 12.561612ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-31T00:13:33.395494Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.022641ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-078843\" ","response":"range_response_count:1 size:5757"}
	{"level":"info","ts":"2023-10-31T00:13:33.395545Z","caller":"traceutil/trace.go:171","msg":"trace[651639044] range","detail":"{range_begin:/registry/minions/embed-certs-078843; range_end:; response_count:1; response_revision:631; }","duration":"139.091664ms","start":"2023-10-31T00:13:33.256444Z","end":"2023-10-31T00:13:33.395536Z","steps":["trace[651639044] 'agreement among raft nodes before linearized reading'  (duration: 138.990335ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-31T00:22:59.764972Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":864}
	{"level":"info","ts":"2023-10-31T00:22:59.768243Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":864,"took":"2.56006ms","hash":192751056}
	{"level":"info","ts":"2023-10-31T00:22:59.768338Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":192751056,"revision":864,"compact-revision":-1}
	{"level":"info","ts":"2023-10-31T00:27:59.774113Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1106}
	{"level":"info","ts":"2023-10-31T00:27:59.776257Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1106,"took":"1.795726ms","hash":1517890779}
	{"level":"info","ts":"2023-10-31T00:27:59.776326Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1517890779,"revision":1106,"compact-revision":864}
	{"level":"warn","ts":"2023-10-31T00:32:38.555984Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.699333ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1119"}
	{"level":"info","ts":"2023-10-31T00:32:38.5568Z","caller":"traceutil/trace.go:171","msg":"trace[323961008] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1574; }","duration":"161.614092ms","start":"2023-10-31T00:32:38.395132Z","end":"2023-10-31T00:32:38.556746Z","steps":["trace[323961008] 'range keys from in-memory index tree'  (duration: 160.545442ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-31T00:32:38.679257Z","caller":"traceutil/trace.go:171","msg":"trace[678208506] transaction","detail":"{read_only:false; response_revision:1575; number_of_response:1; }","duration":"117.305523ms","start":"2023-10-31T00:32:38.561924Z","end":"2023-10-31T00:32:38.679229Z","steps":["trace[678208506] 'process raft request'  (duration: 117.007006ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-31T00:32:59.785372Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1350}
	{"level":"info","ts":"2023-10-31T00:32:59.787655Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1350,"took":"1.772953ms","hash":40000941}
	{"level":"info","ts":"2023-10-31T00:32:59.787759Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":40000941,"revision":1350,"compact-revision":1106}
	{"level":"info","ts":"2023-10-31T00:33:03.205185Z","caller":"traceutil/trace.go:171","msg":"trace[1513400727] linearizableReadLoop","detail":"{readStateIndex:1883; appliedIndex:1882; }","duration":"273.306418ms","start":"2023-10-31T00:33:02.931843Z","end":"2023-10-31T00:33:03.205149Z","steps":["trace[1513400727] 'read index received'  (duration: 272.965402ms)","trace[1513400727] 'applied index is now lower than readState.Index'  (duration: 340.409µs)"],"step_count":2}
	{"level":"warn","ts":"2023-10-31T00:33:03.205632Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.819032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.50.2\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2023-10-31T00:33:03.205722Z","caller":"traceutil/trace.go:171","msg":"trace[1370368032] range","detail":"{range_begin:/registry/masterleases/192.168.50.2; range_end:; response_count:1; response_revision:1594; }","duration":"273.972913ms","start":"2023-10-31T00:33:02.931734Z","end":"2023-10-31T00:33:03.205707Z","steps":["trace[1370368032] 'agreement among raft nodes before linearized reading'  (duration: 273.757263ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-31T00:33:03.206732Z","caller":"traceutil/trace.go:171","msg":"trace[538335067] transaction","detail":"{read_only:false; response_revision:1594; number_of_response:1; }","duration":"379.711833ms","start":"2023-10-31T00:33:02.825691Z","end":"2023-10-31T00:33:03.205403Z","steps":["trace[538335067] 'process raft request'  (duration: 379.15898ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-31T00:33:03.206906Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-31T00:33:02.825672Z","time spent":"381.121064ms","remote":"127.0.0.1:38702","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1593 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2023-10-31T00:33:03.252192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.713188ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-31T00:33:03.252429Z","caller":"traceutil/trace.go:171","msg":"trace[827767305] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1594; }","duration":"299.876898ms","start":"2023-10-31T00:33:02.95245Z","end":"2023-10-31T00:33:03.252326Z","steps":["trace[827767305] 'agreement among raft nodes before linearized reading'  (duration: 254.741982ms)","trace[827767305] 'range keys from in-memory index tree'  (duration: 44.952466ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-31T00:33:03.252534Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-31T00:33:02.952433Z","time spent":"300.081698ms","remote":"127.0.0.1:38656","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2023-10-31T00:33:52.028529Z","caller":"traceutil/trace.go:171","msg":"trace[234563827] transaction","detail":"{read_only:false; response_revision:1634; number_of_response:1; }","duration":"531.682791ms","start":"2023-10-31T00:33:51.49682Z","end":"2023-10-31T00:33:52.028503Z","steps":["trace[234563827] 'process raft request'  (duration: 531.148322ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-31T00:33:52.028679Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-31T00:33:51.496807Z","time spent":"531.803244ms","remote":"127.0.0.1:38702","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1633 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	* 
	* ==> kernel <==
	*  00:34:15 up 21 min,  0 users,  load average: 0.15, 0.14, 0.10
	Linux embed-certs-078843 5.10.57 #1 SMP Mon Oct 30 21:42:24 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033] <==
	* I1031 00:32:01.413677       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1031 00:33:01.413603       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1031 00:33:01.580856       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:33:01.581089       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:33:01.581634       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1031 00:33:02.581711       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:33:02.581951       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:33:02.581991       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1031 00:33:02.582189       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:33:02.582227       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1031 00:33:02.583443       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:33:52.029609       1 trace.go:236] Trace[104801417]: "Update" accept:application/json, */*,audit-id:dc810368-a2d2-448f-98e5-efe3c3b733e2,client:192.168.50.2,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (31-Oct-2023 00:33:51.495) (total time: 534ms):
	Trace[104801417]: ["GuaranteedUpdate etcd3" audit-id:dc810368-a2d2-448f-98e5-efe3c3b733e2,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 533ms (00:33:51.495)
	Trace[104801417]:  ---"Txn call completed" 532ms (00:33:52.029)]
	Trace[104801417]: [534.178534ms] [534.178534ms] END
	I1031 00:34:01.414153       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1031 00:34:02.582832       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:34:02.583201       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:34:02.583248       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1031 00:34:02.584097       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:34:02.584176       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1031 00:34:02.585320       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70] <==
	* I1031 00:28:45.489781       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1031 00:29:04.975909       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="397.751µs"
	E1031 00:29:14.935897       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:29:15.499465       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1031 00:29:16.977382       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="122.82µs"
	E1031 00:29:44.941877       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:29:45.508941       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:30:14.950272       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:30:15.521739       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:30:44.958241       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:30:45.533235       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:31:14.964229       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:31:15.543408       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:31:44.971218       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:31:45.552274       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:32:14.978092       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:32:15.561835       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:32:44.986421       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:32:45.572738       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:33:14.992876       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:33:15.580702       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:33:45.000818       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:33:45.594808       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1031 00:34:12.988880       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="451.615µs"
	E1031 00:34:15.010370       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	
	* 
	* ==> kube-proxy [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3] <==
	* I1031 00:13:03.997318       1 server_others.go:69] "Using iptables proxy"
	I1031 00:13:04.039269       1 node.go:141] Successfully retrieved node IP: 192.168.50.2
	I1031 00:13:04.287163       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1031 00:13:04.287227       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1031 00:13:04.315503       1 server_others.go:152] "Using iptables Proxier"
	I1031 00:13:04.317867       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1031 00:13:04.319107       1 server.go:846] "Version info" version="v1.28.3"
	I1031 00:13:04.319269       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 00:13:04.336942       1 config.go:315] "Starting node config controller"
	I1031 00:13:04.337205       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1031 00:13:04.338653       1 config.go:188] "Starting service config controller"
	I1031 00:13:04.338706       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1031 00:13:04.338748       1 config.go:97] "Starting endpoint slice config controller"
	I1031 00:13:04.338771       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1031 00:13:04.438098       1 shared_informer.go:318] Caches are synced for node config
	I1031 00:13:04.438935       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1031 00:13:04.439092       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80] <==
	* I1031 00:12:59.652169       1 serving.go:348] Generated self-signed cert in-memory
	W1031 00:13:01.525481       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1031 00:13:01.525601       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1031 00:13:01.525688       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1031 00:13:01.525695       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1031 00:13:01.583193       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1031 00:13:01.583240       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 00:13:01.584995       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1031 00:13:01.587752       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1031 00:13:01.587814       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1031 00:13:01.587830       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1031 00:13:01.688149       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-31 00:12:28 UTC, ends at Tue 2023-10-31 00:34:15 UTC. --
	Oct 31 00:31:55 embed-certs-078843 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 00:31:55 embed-certs-078843 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 00:31:58 embed-certs-078843 kubelet[918]: E1031 00:31:58.959838     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:32:10 embed-certs-078843 kubelet[918]: E1031 00:32:10.960535     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:32:25 embed-certs-078843 kubelet[918]: E1031 00:32:25.961058     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:32:39 embed-certs-078843 kubelet[918]: E1031 00:32:39.960393     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:32:52 embed-certs-078843 kubelet[918]: E1031 00:32:52.960754     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:32:55 embed-certs-078843 kubelet[918]: E1031 00:32:55.977714     918 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 00:32:55 embed-certs-078843 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 00:32:55 embed-certs-078843 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 00:32:55 embed-certs-078843 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 00:32:56 embed-certs-078843 kubelet[918]: E1031 00:32:56.001221     918 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Oct 31 00:33:04 embed-certs-078843 kubelet[918]: E1031 00:33:04.960287     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:33:18 embed-certs-078843 kubelet[918]: E1031 00:33:18.961131     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:33:32 embed-certs-078843 kubelet[918]: E1031 00:33:32.961137     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:33:47 embed-certs-078843 kubelet[918]: E1031 00:33:47.960630     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:33:55 embed-certs-078843 kubelet[918]: E1031 00:33:55.974180     918 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 00:33:55 embed-certs-078843 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 00:33:55 embed-certs-078843 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 00:33:55 embed-certs-078843 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 00:33:59 embed-certs-078843 kubelet[918]: E1031 00:33:59.978808     918 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 31 00:33:59 embed-certs-078843 kubelet[918]: E1031 00:33:59.978859     918 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 31 00:33:59 embed-certs-078843 kubelet[918]: E1031 00:33:59.979863     918 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-b7qtq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-pm6qx_kube-system(5ed61015-eb88-4381-adc3-8d1f4021c6aa): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 31 00:33:59 embed-certs-078843 kubelet[918]: E1031 00:33:59.980100     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	Oct 31 00:34:12 embed-certs-078843 kubelet[918]: E1031 00:34:12.961556     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pm6qx" podUID="5ed61015-eb88-4381-adc3-8d1f4021c6aa"
	
	* 
	* ==> storage-provisioner [622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c] <==
	* I1031 00:13:03.691437       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1031 00:13:33.750927       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3] <==
	* I1031 00:13:34.326323       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1031 00:13:34.352712       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1031 00:13:34.352922       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1031 00:13:51.755808       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1031 00:13:51.756196       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-078843_f48dbb48-29f3-4d64-a9e0-34066179c473!
	I1031 00:13:51.759230       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9a3d186e-da90-4734-84c0-9ae37e0e9998", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-078843_f48dbb48-29f3-4d64-a9e0-34066179c473 became leader
	I1031 00:13:51.857296       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-078843_f48dbb48-29f3-4d64-a9e0-34066179c473!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-078843 -n embed-certs-078843
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-078843 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-pm6qx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-078843 describe pod metrics-server-57f55c9bc5-pm6qx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-078843 describe pod metrics-server-57f55c9bc5-pm6qx: exit status 1 (104.928981ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-pm6qx" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-078843 describe pod metrics-server-57f55c9bc5-pm6qx: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (467.23s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (529.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1031 00:28:31.233232  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1031 00:29:14.584153  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-892233 -n default-k8s-diff-port-892233
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-10-31 00:36:16.398031576 +0000 UTC m=+5678.648048535
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-892233 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-892233 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.92µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-892233 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-892233 -n default-k8s-diff-port-892233
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-892233 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-892233 logs -n 25: (1.336479594s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p kindnet-740627 sudo                               | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC | 31 Oct 23 00:35 UTC |
	|         | systemctl cat kubelet                                |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo                               | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC | 31 Oct 23 00:35 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |                |                     |                     |
	|         | --full --no-pager                                    |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo cat                           | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC | 31 Oct 23 00:35 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo cat                           | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC | 31 Oct 23 00:35 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo                               | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |                |                     |                     |
	|         | --full --no-pager                                    |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo                               | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC | 31 Oct 23 00:35 UTC |
	|         | systemctl cat docker                                 |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo cat                           | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC | 31 Oct 23 00:35 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo docker                        | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC |                     |
	|         | system info                                          |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo                               | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |                |                     |                     |
	|         | --all --full --no-pager                              |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo                               | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC | 31 Oct 23 00:35 UTC |
	|         | systemctl cat cri-docker                             |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo cat                           | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo cat                           | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC | 31 Oct 23 00:35 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo                               | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC | 31 Oct 23 00:35 UTC |
	|         | cri-dockerd --version                                |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo                               | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC |                     |
	|         | systemctl status containerd                          |                           |         |                |                     |                     |
	|         | --all --full --no-pager                              |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo                               | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC | 31 Oct 23 00:35 UTC |
	|         | systemctl cat containerd                             |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo cat                           | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC | 31 Oct 23 00:35 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo cat                           | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC | 31 Oct 23 00:35 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo                               | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC | 31 Oct 23 00:35 UTC |
	|         | containerd config dump                               |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo                               | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC | 31 Oct 23 00:35 UTC |
	|         | systemctl status crio --all                          |                           |         |                |                     |                     |
	|         | --full --no-pager                                    |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo                               | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC | 31 Oct 23 00:35 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo find                          | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC | 31 Oct 23 00:35 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |                |                     |                     |
	| ssh     | -p kindnet-740627 sudo crio                          | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC | 31 Oct 23 00:35 UTC |
	|         | config                                               |                           |         |                |                     |                     |
	| delete  | -p kindnet-740627                                    | kindnet-740627            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC | 31 Oct 23 00:35 UTC |
	| start   | -p enable-default-cni-740627                         | enable-default-cni-740627 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:35 UTC |                     |
	|         | --memory=3072                                        |                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |                |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |                |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |                |                     |                     |
	|         | --driver=kvm2                                        |                           |         |                |                     |                     |
	|         | --container-runtime=crio                             |                           |         |                |                     |                     |
	| ssh     | -p calico-740627 pgrep -a                            | calico-740627             | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:36 UTC | 31 Oct 23 00:36 UTC |
	|         | kubelet                                              |                           |         |                |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 00:35:53
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 00:35:53.650177  259617 out.go:296] Setting OutFile to fd 1 ...
	I1031 00:35:53.650353  259617 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:35:53.650368  259617 out.go:309] Setting ErrFile to fd 2...
	I1031 00:35:53.650374  259617 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:35:53.650651  259617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1031 00:35:53.651421  259617 out.go:303] Setting JSON to false
	I1031 00:35:53.652968  259617 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":29906,"bootTime":1698682648,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 00:35:53.653051  259617 start.go:138] virtualization: kvm guest
	I1031 00:35:53.655361  259617 out.go:177] * [enable-default-cni-740627] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 00:35:53.657242  259617 notify.go:220] Checking for updates...
	I1031 00:35:53.657272  259617 out.go:177]   - MINIKUBE_LOCATION=17527
	I1031 00:35:53.659648  259617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 00:35:53.661419  259617 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:35:53.665202  259617 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1031 00:35:53.666599  259617 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 00:35:53.668025  259617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 00:35:50.966604  257603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:35:51.466182  257603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:35:51.965733  257603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:35:52.466250  257603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:35:52.966203  257603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:35:53.466214  257603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:35:53.632227  257603 kubeadm.go:1081] duration metric: took 10.058892049s to wait for elevateKubeSystemPrivileges.
	I1031 00:35:53.632261  257603 kubeadm.go:406] StartCluster complete in 24.384822594s
	I1031 00:35:53.632285  257603 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:35:53.632379  257603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:35:53.634174  257603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:35:53.636540  257603 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:35:53.636796  257603 config.go:182] Loaded profile config "custom-flannel-740627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:35:53.636858  257603 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:35:53.636933  257603 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-740627"
	I1031 00:35:53.637000  257603 addons.go:231] Setting addon storage-provisioner=true in "custom-flannel-740627"
	I1031 00:35:53.637048  257603 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-740627"
	I1031 00:35:53.637080  257603 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-740627"
	I1031 00:35:53.637057  257603 host.go:66] Checking if "custom-flannel-740627" exists ...
	I1031 00:35:53.637622  257603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:35:53.637639  257603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:35:53.637668  257603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:35:53.637693  257603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:35:53.657609  257603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I1031 00:35:53.658051  257603 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:35:53.658787  257603 main.go:141] libmachine: Using API Version  1
	I1031 00:35:53.658830  257603 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:35:53.659561  257603 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:35:53.659925  257603 main.go:141] libmachine: (custom-flannel-740627) Calling .GetState
	I1031 00:35:53.664011  257603 addons.go:231] Setting addon default-storageclass=true in "custom-flannel-740627"
	I1031 00:35:53.664061  257603 host.go:66] Checking if "custom-flannel-740627" exists ...
	I1031 00:35:53.664529  257603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:35:53.664571  257603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:35:53.664807  257603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33103
	I1031 00:35:53.665314  257603 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:35:53.665903  257603 main.go:141] libmachine: Using API Version  1
	I1031 00:35:53.665926  257603 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:35:53.666443  257603 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:35:53.667091  257603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:35:53.667136  257603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:35:53.685462  257603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37625
	I1031 00:35:53.686017  257603 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:35:53.686543  257603 main.go:141] libmachine: Using API Version  1
	I1031 00:35:53.686573  257603 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:35:53.686947  257603 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:35:53.687599  257603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:35:53.687646  257603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:35:53.687906  257603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32971
	I1031 00:35:53.688349  257603 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:35:53.688842  257603 main.go:141] libmachine: Using API Version  1
	I1031 00:35:53.688864  257603 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:35:53.689199  257603 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:35:53.689389  257603 main.go:141] libmachine: (custom-flannel-740627) Calling .GetState
	I1031 00:35:53.691119  257603 main.go:141] libmachine: (custom-flannel-740627) Calling .DriverName
	I1031 00:35:53.692923  257603 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:35:53.669871  259617 config.go:182] Loaded profile config "calico-740627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:35:53.669989  259617 config.go:182] Loaded profile config "custom-flannel-740627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:35:53.670079  259617 config.go:182] Loaded profile config "default-k8s-diff-port-892233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:35:53.670182  259617 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 00:35:53.724896  259617 out.go:177] * Using the kvm2 driver based on user configuration
	I1031 00:35:53.694277  257603 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:35:53.694297  257603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:35:53.694323  257603 main.go:141] libmachine: (custom-flannel-740627) Calling .GetSSHHostname
	I1031 00:35:53.702624  257603 main.go:141] libmachine: (custom-flannel-740627) DBG | domain custom-flannel-740627 has defined MAC address 52:54:00:89:12:ac in network mk-custom-flannel-740627
	I1031 00:35:53.704404  257603 main.go:141] libmachine: (custom-flannel-740627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:12:ac", ip: ""} in network mk-custom-flannel-740627: {Iface:virbr2 ExpiryTime:2023-10-31 01:35:12 +0000 UTC Type:0 Mac:52:54:00:89:12:ac Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:custom-flannel-740627 Clientid:01:52:54:00:89:12:ac}
	I1031 00:35:53.704434  257603 main.go:141] libmachine: (custom-flannel-740627) DBG | domain custom-flannel-740627 has defined IP address 192.168.72.48 and MAC address 52:54:00:89:12:ac in network mk-custom-flannel-740627
	I1031 00:35:53.704833  257603 main.go:141] libmachine: (custom-flannel-740627) Calling .GetSSHPort
	I1031 00:35:53.705156  257603 main.go:141] libmachine: (custom-flannel-740627) Calling .GetSSHKeyPath
	I1031 00:35:53.705783  257603 main.go:141] libmachine: (custom-flannel-740627) Calling .GetSSHUsername
	I1031 00:35:53.706014  257603 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/custom-flannel-740627/id_rsa Username:docker}
	I1031 00:35:53.706579  257603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37971
	I1031 00:35:53.706997  257603 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:35:53.707580  257603 main.go:141] libmachine: Using API Version  1
	I1031 00:35:53.707601  257603 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:35:53.707937  257603 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:35:53.708139  257603 main.go:141] libmachine: (custom-flannel-740627) Calling .GetState
	I1031 00:35:53.710108  257603 main.go:141] libmachine: (custom-flannel-740627) Calling .DriverName
	I1031 00:35:53.710462  257603 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:35:53.710482  257603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:35:53.710507  257603 main.go:141] libmachine: (custom-flannel-740627) Calling .GetSSHHostname
	I1031 00:35:53.714142  257603 main.go:141] libmachine: (custom-flannel-740627) DBG | domain custom-flannel-740627 has defined MAC address 52:54:00:89:12:ac in network mk-custom-flannel-740627
	I1031 00:35:53.714645  257603 main.go:141] libmachine: (custom-flannel-740627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:12:ac", ip: ""} in network mk-custom-flannel-740627: {Iface:virbr2 ExpiryTime:2023-10-31 01:35:12 +0000 UTC Type:0 Mac:52:54:00:89:12:ac Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:custom-flannel-740627 Clientid:01:52:54:00:89:12:ac}
	I1031 00:35:53.714670  257603 main.go:141] libmachine: (custom-flannel-740627) DBG | domain custom-flannel-740627 has defined IP address 192.168.72.48 and MAC address 52:54:00:89:12:ac in network mk-custom-flannel-740627
	I1031 00:35:53.714926  257603 main.go:141] libmachine: (custom-flannel-740627) Calling .GetSSHPort
	I1031 00:35:53.715122  257603 main.go:141] libmachine: (custom-flannel-740627) Calling .GetSSHKeyPath
	I1031 00:35:53.715305  257603 main.go:141] libmachine: (custom-flannel-740627) Calling .GetSSHUsername
	I1031 00:35:53.715696  257603 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/custom-flannel-740627/id_rsa Username:docker}
	I1031 00:35:53.729173  257603 kapi.go:248] "coredns" deployment in "kube-system" namespace and "custom-flannel-740627" context rescaled to 1 replicas
	I1031 00:35:53.729202  257603 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.72.48 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:35:53.730931  257603 out.go:177] * Verifying Kubernetes components...
	I1031 00:35:53.726443  259617 start.go:298] selected driver: kvm2
	I1031 00:35:53.726457  259617 start.go:902] validating driver "kvm2" against <nil>
	I1031 00:35:53.726468  259617 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 00:35:53.727121  259617 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:35:53.727213  259617 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17527-208817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 00:35:53.743873  259617 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 00:35:53.743917  259617 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	E1031 00:35:53.744124  259617 start_flags.go:465] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1031 00:35:53.744147  259617 start_flags.go:934] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 00:35:53.744167  259617 cni.go:84] Creating CNI manager for "bridge"
	I1031 00:35:53.744172  259617 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1031 00:35:53.744181  259617 start_flags.go:323] config:
	{Name:enable-default-cni-740627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:enable-default-cni-740627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:35:53.744354  259617 iso.go:125] acquiring lock: {Name:mk17c26869b21ec4c3726ac5b4b2fb393d92c043 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:35:53.746037  259617 out.go:177] * Starting control plane node enable-default-cni-740627 in cluster enable-default-cni-740627
	I1031 00:35:53.747633  259617 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:35:53.747672  259617 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1031 00:35:53.747686  259617 cache.go:56] Caching tarball of preloaded images
	I1031 00:35:53.747807  259617 preload.go:174] Found /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1031 00:35:53.747819  259617 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1031 00:35:53.747949  259617 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/enable-default-cni-740627/config.json ...
	I1031 00:35:53.747976  259617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/enable-default-cni-740627/config.json: {Name:mkbe99c87d1f302eb63131a4318f07e145aff2ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:35:53.748141  259617 start.go:365] acquiring machines lock for enable-default-cni-740627: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 00:35:53.748182  259617 start.go:369] acquired machines lock for "enable-default-cni-740627" in 20.743µs
	I1031 00:35:53.748206  259617 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-740627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.28.3 ClusterName:enable-default-cni-740627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:35:53.748288  259617 start.go:125] createHost starting for "" (driver="kvm2")
	I1031 00:35:53.732465  257603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:35:53.827051  257603 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 00:35:53.828504  257603 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-740627" to be "Ready" ...
	I1031 00:35:53.892510  257603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:35:53.931051  257603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:35:54.797454  257603 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1031 00:35:55.114935  257603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.222377952s)
	I1031 00:35:55.114961  257603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.18387292s)
	I1031 00:35:55.115001  257603 main.go:141] libmachine: Making call to close driver server
	I1031 00:35:55.115007  257603 main.go:141] libmachine: Making call to close driver server
	I1031 00:35:55.115016  257603 main.go:141] libmachine: (custom-flannel-740627) Calling .Close
	I1031 00:35:55.115021  257603 main.go:141] libmachine: (custom-flannel-740627) Calling .Close
	I1031 00:35:55.115520  257603 main.go:141] libmachine: (custom-flannel-740627) DBG | Closing plugin on server side
	I1031 00:35:55.115535  257603 main.go:141] libmachine: (custom-flannel-740627) DBG | Closing plugin on server side
	I1031 00:35:55.115560  257603 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:35:55.115570  257603 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:35:55.115580  257603 main.go:141] libmachine: Making call to close driver server
	I1031 00:35:55.115580  257603 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:35:55.115589  257603 main.go:141] libmachine: (custom-flannel-740627) Calling .Close
	I1031 00:35:55.115591  257603 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:35:55.115602  257603 main.go:141] libmachine: Making call to close driver server
	I1031 00:35:55.115611  257603 main.go:141] libmachine: (custom-flannel-740627) Calling .Close
	I1031 00:35:55.117648  257603 main.go:141] libmachine: (custom-flannel-740627) DBG | Closing plugin on server side
	I1031 00:35:55.117682  257603 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:35:55.117692  257603 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:35:55.117733  257603 main.go:141] libmachine: (custom-flannel-740627) DBG | Closing plugin on server side
	I1031 00:35:55.117829  257603 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:35:55.117864  257603 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:35:55.137385  257603 main.go:141] libmachine: Making call to close driver server
	I1031 00:35:55.137413  257603 main.go:141] libmachine: (custom-flannel-740627) Calling .Close
	I1031 00:35:55.137712  257603 main.go:141] libmachine: (custom-flannel-740627) DBG | Closing plugin on server side
	I1031 00:35:55.137735  257603 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:35:55.137746  257603 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:35:55.140015  257603 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1031 00:35:55.141742  257603 addons.go:502] enable addons completed in 1.504883445s: enabled=[storage-provisioner default-storageclass]
	I1031 00:35:54.361325  257291 pod_ready.go:102] pod "calico-kube-controllers-558d465845-9pzql" in "kube-system" namespace has status "Ready":"False"
	I1031 00:35:56.370962  257291 pod_ready.go:102] pod "calico-kube-controllers-558d465845-9pzql" in "kube-system" namespace has status "Ready":"False"
	I1031 00:35:53.750178  259617 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1031 00:35:53.750338  259617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:35:53.750387  259617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:35:53.766529  259617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36345
	I1031 00:35:53.767107  259617 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:35:53.767907  259617 main.go:141] libmachine: Using API Version  1
	I1031 00:35:53.767939  259617 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:35:53.768340  259617 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:35:53.768572  259617 main.go:141] libmachine: (enable-default-cni-740627) Calling .GetMachineName
	I1031 00:35:53.768748  259617 main.go:141] libmachine: (enable-default-cni-740627) Calling .DriverName
	I1031 00:35:53.769072  259617 start.go:159] libmachine.API.Create for "enable-default-cni-740627" (driver="kvm2")
	I1031 00:35:53.769128  259617 client.go:168] LocalClient.Create starting
	I1031 00:35:53.769168  259617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem
	I1031 00:35:53.769220  259617 main.go:141] libmachine: Decoding PEM data...
	I1031 00:35:53.769244  259617 main.go:141] libmachine: Parsing certificate...
	I1031 00:35:53.769322  259617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem
	I1031 00:35:53.769354  259617 main.go:141] libmachine: Decoding PEM data...
	I1031 00:35:53.769371  259617 main.go:141] libmachine: Parsing certificate...
	I1031 00:35:53.769404  259617 main.go:141] libmachine: Running pre-create checks...
	I1031 00:35:53.769418  259617 main.go:141] libmachine: (enable-default-cni-740627) Calling .PreCreateCheck
	I1031 00:35:53.769884  259617 main.go:141] libmachine: (enable-default-cni-740627) Calling .GetConfigRaw
	I1031 00:35:53.770415  259617 main.go:141] libmachine: Creating machine...
	I1031 00:35:53.770435  259617 main.go:141] libmachine: (enable-default-cni-740627) Calling .Create
	I1031 00:35:53.770600  259617 main.go:141] libmachine: (enable-default-cni-740627) Creating KVM machine...
	I1031 00:35:53.772110  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | found existing default KVM network
	I1031 00:35:53.773899  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:35:53.773689  259670 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:97:44:a3} reservation:<nil>}
	I1031 00:35:53.775391  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:35:53.775283  259670 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:8a:93:a3} reservation:<nil>}
	I1031 00:35:53.776867  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:35:53.776784  259670 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000205c40}
	I1031 00:35:53.783744  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | trying to create private KVM network mk-enable-default-cni-740627 192.168.61.0/24...
	I1031 00:35:53.882724  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | private KVM network mk-enable-default-cni-740627 192.168.61.0/24 created
	I1031 00:35:53.882787  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:35:53.882689  259670 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17527-208817/.minikube
	I1031 00:35:53.882817  259617 main.go:141] libmachine: (enable-default-cni-740627) Setting up store path in /home/jenkins/minikube-integration/17527-208817/.minikube/machines/enable-default-cni-740627 ...
	I1031 00:35:53.882844  259617 main.go:141] libmachine: (enable-default-cni-740627) Building disk image from file:///home/jenkins/minikube-integration/17527-208817/.minikube/cache/iso/amd64/minikube-v1.32.0-1698684775-17527-amd64.iso
	I1031 00:35:53.883021  259617 main.go:141] libmachine: (enable-default-cni-740627) Downloading /home/jenkins/minikube-integration/17527-208817/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17527-208817/.minikube/cache/iso/amd64/minikube-v1.32.0-1698684775-17527-amd64.iso...
	I1031 00:35:54.125037  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:35:54.124826  259670 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/enable-default-cni-740627/id_rsa...
	I1031 00:35:54.269581  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:35:54.269409  259670 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/enable-default-cni-740627/enable-default-cni-740627.rawdisk...
	I1031 00:35:54.269623  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | Writing magic tar header
	I1031 00:35:54.269642  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | Writing SSH key tar header
	I1031 00:35:54.269658  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:35:54.269546  259670 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17527-208817/.minikube/machines/enable-default-cni-740627 ...
	I1031 00:35:54.269683  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/enable-default-cni-740627
	I1031 00:35:54.269739  259617 main.go:141] libmachine: (enable-default-cni-740627) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube/machines/enable-default-cni-740627 (perms=drwx------)
	I1031 00:35:54.269758  259617 main.go:141] libmachine: (enable-default-cni-740627) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube/machines (perms=drwxr-xr-x)
	I1031 00:35:54.269781  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube/machines
	I1031 00:35:54.269803  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube
	I1031 00:35:54.269817  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817
	I1031 00:35:54.269841  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 00:35:54.269876  259617 main.go:141] libmachine: (enable-default-cni-740627) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube (perms=drwxr-xr-x)
	I1031 00:35:54.269891  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | Checking permissions on dir: /home/jenkins
	I1031 00:35:54.269903  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | Checking permissions on dir: /home
	I1031 00:35:54.269913  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | Skipping /home - not owner
	I1031 00:35:54.269952  259617 main.go:141] libmachine: (enable-default-cni-740627) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817 (perms=drwxrwxr-x)
	I1031 00:35:54.269990  259617 main.go:141] libmachine: (enable-default-cni-740627) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 00:35:54.270009  259617 main.go:141] libmachine: (enable-default-cni-740627) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 00:35:54.270031  259617 main.go:141] libmachine: (enable-default-cni-740627) Creating domain...
	I1031 00:35:54.271065  259617 main.go:141] libmachine: (enable-default-cni-740627) define libvirt domain using xml: 
	I1031 00:35:54.271085  259617 main.go:141] libmachine: (enable-default-cni-740627) <domain type='kvm'>
	I1031 00:35:54.271093  259617 main.go:141] libmachine: (enable-default-cni-740627)   <name>enable-default-cni-740627</name>
	I1031 00:35:54.271101  259617 main.go:141] libmachine: (enable-default-cni-740627)   <memory unit='MiB'>3072</memory>
	I1031 00:35:54.271109  259617 main.go:141] libmachine: (enable-default-cni-740627)   <vcpu>2</vcpu>
	I1031 00:35:54.271114  259617 main.go:141] libmachine: (enable-default-cni-740627)   <features>
	I1031 00:35:54.271120  259617 main.go:141] libmachine: (enable-default-cni-740627)     <acpi/>
	I1031 00:35:54.271124  259617 main.go:141] libmachine: (enable-default-cni-740627)     <apic/>
	I1031 00:35:54.271131  259617 main.go:141] libmachine: (enable-default-cni-740627)     <pae/>
	I1031 00:35:54.271139  259617 main.go:141] libmachine: (enable-default-cni-740627)     
	I1031 00:35:54.271149  259617 main.go:141] libmachine: (enable-default-cni-740627)   </features>
	I1031 00:35:54.271159  259617 main.go:141] libmachine: (enable-default-cni-740627)   <cpu mode='host-passthrough'>
	I1031 00:35:54.271168  259617 main.go:141] libmachine: (enable-default-cni-740627)   
	I1031 00:35:54.271237  259617 main.go:141] libmachine: (enable-default-cni-740627)   </cpu>
	I1031 00:35:54.271255  259617 main.go:141] libmachine: (enable-default-cni-740627)   <os>
	I1031 00:35:54.271268  259617 main.go:141] libmachine: (enable-default-cni-740627)     <type>hvm</type>
	I1031 00:35:54.271280  259617 main.go:141] libmachine: (enable-default-cni-740627)     <boot dev='cdrom'/>
	I1031 00:35:54.271291  259617 main.go:141] libmachine: (enable-default-cni-740627)     <boot dev='hd'/>
	I1031 00:35:54.271300  259617 main.go:141] libmachine: (enable-default-cni-740627)     <bootmenu enable='no'/>
	I1031 00:35:54.271306  259617 main.go:141] libmachine: (enable-default-cni-740627)   </os>
	I1031 00:35:54.271313  259617 main.go:141] libmachine: (enable-default-cni-740627)   <devices>
	I1031 00:35:54.271519  259617 main.go:141] libmachine: (enable-default-cni-740627)     <disk type='file' device='cdrom'>
	I1031 00:35:54.271867  259617 main.go:141] libmachine: (enable-default-cni-740627)       <source file='/home/jenkins/minikube-integration/17527-208817/.minikube/machines/enable-default-cni-740627/boot2docker.iso'/>
	I1031 00:35:54.271915  259617 main.go:141] libmachine: (enable-default-cni-740627)       <target dev='hdc' bus='scsi'/>
	I1031 00:35:54.271936  259617 main.go:141] libmachine: (enable-default-cni-740627)       <readonly/>
	I1031 00:35:54.271948  259617 main.go:141] libmachine: (enable-default-cni-740627)     </disk>
	I1031 00:35:54.271958  259617 main.go:141] libmachine: (enable-default-cni-740627)     <disk type='file' device='disk'>
	I1031 00:35:54.271980  259617 main.go:141] libmachine: (enable-default-cni-740627)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 00:35:54.272002  259617 main.go:141] libmachine: (enable-default-cni-740627)       <source file='/home/jenkins/minikube-integration/17527-208817/.minikube/machines/enable-default-cni-740627/enable-default-cni-740627.rawdisk'/>
	I1031 00:35:54.272019  259617 main.go:141] libmachine: (enable-default-cni-740627)       <target dev='hda' bus='virtio'/>
	I1031 00:35:54.272029  259617 main.go:141] libmachine: (enable-default-cni-740627)     </disk>
	I1031 00:35:54.272044  259617 main.go:141] libmachine: (enable-default-cni-740627)     <interface type='network'>
	I1031 00:35:54.272063  259617 main.go:141] libmachine: (enable-default-cni-740627)       <source network='mk-enable-default-cni-740627'/>
	I1031 00:35:54.272075  259617 main.go:141] libmachine: (enable-default-cni-740627)       <model type='virtio'/>
	I1031 00:35:54.272084  259617 main.go:141] libmachine: (enable-default-cni-740627)     </interface>
	I1031 00:35:54.272100  259617 main.go:141] libmachine: (enable-default-cni-740627)     <interface type='network'>
	I1031 00:35:54.272110  259617 main.go:141] libmachine: (enable-default-cni-740627)       <source network='default'/>
	I1031 00:35:54.272128  259617 main.go:141] libmachine: (enable-default-cni-740627)       <model type='virtio'/>
	I1031 00:35:54.272137  259617 main.go:141] libmachine: (enable-default-cni-740627)     </interface>
	I1031 00:35:54.272151  259617 main.go:141] libmachine: (enable-default-cni-740627)     <serial type='pty'>
	I1031 00:35:54.272165  259617 main.go:141] libmachine: (enable-default-cni-740627)       <target port='0'/>
	I1031 00:35:54.272176  259617 main.go:141] libmachine: (enable-default-cni-740627)     </serial>
	I1031 00:35:54.272186  259617 main.go:141] libmachine: (enable-default-cni-740627)     <console type='pty'>
	I1031 00:35:54.272202  259617 main.go:141] libmachine: (enable-default-cni-740627)       <target type='serial' port='0'/>
	I1031 00:35:54.272211  259617 main.go:141] libmachine: (enable-default-cni-740627)     </console>
	I1031 00:35:54.272227  259617 main.go:141] libmachine: (enable-default-cni-740627)     <rng model='virtio'>
	I1031 00:35:54.272237  259617 main.go:141] libmachine: (enable-default-cni-740627)       <backend model='random'>/dev/random</backend>
	I1031 00:35:54.272248  259617 main.go:141] libmachine: (enable-default-cni-740627)     </rng>
	I1031 00:35:54.272257  259617 main.go:141] libmachine: (enable-default-cni-740627)     
	I1031 00:35:54.272272  259617 main.go:141] libmachine: (enable-default-cni-740627)     
	I1031 00:35:54.272281  259617 main.go:141] libmachine: (enable-default-cni-740627)   </devices>
	I1031 00:35:54.272296  259617 main.go:141] libmachine: (enable-default-cni-740627) </domain>
	I1031 00:35:54.272304  259617 main.go:141] libmachine: (enable-default-cni-740627) 
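
	[editor's note] The XML dumped above is the complete libvirt domain definition the KVM driver generates before booting the node VM. As a rough sketch only — not minikube's actual driver code; the libvirt-go calls are assumed from that library's public API — defining and then starting such a domain looks roughly like this:

	package main

	import (
		"fmt"
		"log"

		libvirt "github.com/libvirt/libvirt-go"
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connecting to libvirt: %v", err)
		}
		defer conn.Close()

		// Stand-in for the full <domain>...</domain> XML logged above.
		domainXML := "<domain type='kvm'>...</domain>"

		// Define the persistent domain, then start it (the "Creating domain..." step in the log).
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			log.Fatalf("defining domain: %v", err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil {
			log.Fatalf("starting domain: %v", err)
		}
		fmt.Println("domain started; waiting for a DHCP lease / IP...")
	}

	Once the domain is running, the driver falls back to polling for the VM's IP, which is what the retry lines that follow record.
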
	I1031 00:35:54.277403  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | domain enable-default-cni-740627 has defined MAC address 52:54:00:70:06:65 in network default
	I1031 00:35:54.278117  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | domain enable-default-cni-740627 has defined MAC address 52:54:00:35:40:c2 in network mk-enable-default-cni-740627
	I1031 00:35:54.278141  259617 main.go:141] libmachine: (enable-default-cni-740627) Ensuring networks are active...
	I1031 00:35:54.278898  259617 main.go:141] libmachine: (enable-default-cni-740627) Ensuring network default is active
	I1031 00:35:54.279286  259617 main.go:141] libmachine: (enable-default-cni-740627) Ensuring network mk-enable-default-cni-740627 is active
	I1031 00:35:54.279786  259617 main.go:141] libmachine: (enable-default-cni-740627) Getting domain xml...
	I1031 00:35:54.280595  259617 main.go:141] libmachine: (enable-default-cni-740627) Creating domain...
	I1031 00:35:55.735602  259617 main.go:141] libmachine: (enable-default-cni-740627) Waiting to get IP...
	I1031 00:35:55.736754  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | domain enable-default-cni-740627 has defined MAC address 52:54:00:35:40:c2 in network mk-enable-default-cni-740627
	I1031 00:35:55.737297  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | unable to find current IP address of domain enable-default-cni-740627 in network mk-enable-default-cni-740627
	I1031 00:35:55.737329  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:35:55.737262  259670 retry.go:31] will retry after 204.041754ms: waiting for machine to come up
	I1031 00:35:55.942946  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | domain enable-default-cni-740627 has defined MAC address 52:54:00:35:40:c2 in network mk-enable-default-cni-740627
	I1031 00:35:55.943448  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | unable to find current IP address of domain enable-default-cni-740627 in network mk-enable-default-cni-740627
	I1031 00:35:55.943482  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:35:55.943406  259670 retry.go:31] will retry after 389.086151ms: waiting for machine to come up
	I1031 00:35:56.334356  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | domain enable-default-cni-740627 has defined MAC address 52:54:00:35:40:c2 in network mk-enable-default-cni-740627
	I1031 00:35:56.334872  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | unable to find current IP address of domain enable-default-cni-740627 in network mk-enable-default-cni-740627
	I1031 00:35:56.334934  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:35:56.334812  259670 retry.go:31] will retry after 438.427276ms: waiting for machine to come up
	I1031 00:35:56.774490  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | domain enable-default-cni-740627 has defined MAC address 52:54:00:35:40:c2 in network mk-enable-default-cni-740627
	I1031 00:35:56.775106  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | unable to find current IP address of domain enable-default-cni-740627 in network mk-enable-default-cni-740627
	I1031 00:35:56.775137  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:35:56.775028  259670 retry.go:31] will retry after 578.090068ms: waiting for machine to come up
	I1031 00:35:57.354832  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | domain enable-default-cni-740627 has defined MAC address 52:54:00:35:40:c2 in network mk-enable-default-cni-740627
	I1031 00:35:57.355686  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | unable to find current IP address of domain enable-default-cni-740627 in network mk-enable-default-cni-740627
	I1031 00:35:57.355885  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:35:57.355792  259670 retry.go:31] will retry after 486.005448ms: waiting for machine to come up
	I1031 00:35:57.843857  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | domain enable-default-cni-740627 has defined MAC address 52:54:00:35:40:c2 in network mk-enable-default-cni-740627
	I1031 00:35:57.844489  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | unable to find current IP address of domain enable-default-cni-740627 in network mk-enable-default-cni-740627
	I1031 00:35:57.844518  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:35:57.844413  259670 retry.go:31] will retry after 739.70562ms: waiting for machine to come up
	I1031 00:35:58.585539  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | domain enable-default-cni-740627 has defined MAC address 52:54:00:35:40:c2 in network mk-enable-default-cni-740627
	I1031 00:35:58.586215  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | unable to find current IP address of domain enable-default-cni-740627 in network mk-enable-default-cni-740627
	I1031 00:35:58.586248  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:35:58.586136  259670 retry.go:31] will retry after 1.014921703s: waiting for machine to come up
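
	[editor's note] The "will retry after ...: waiting for machine to come up" lines show the driver polling for the VM's DHCP lease with a growing, jittered delay. A minimal sketch of that pattern, assuming a hypothetical lookupIP helper that queries the network's leases for the domain's MAC address (illustrative only, not the driver's exact code):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the libvirt network's DHCP leases for the
	// domain's MAC address; it returns "" until a lease shows up.
	func lookupIP() (string, error) { return "", nil }

	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			ip, err := lookupIP()
			if err == nil && ip != "" {
				return ip, nil
			}
			// Grow the delay and add jitter, mirroring the 204ms, 389ms, 438ms, ... steps above.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("attempt %d: will retry after %v: waiting for machine to come up\n", attempt, sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return "", errors.New("timed out waiting for machine to come up")
	}

	func main() {
		if ip, err := waitForIP(2 * time.Minute); err == nil {
			fmt.Println("machine IP:", ip)
		}
	}
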
	I1031 00:35:55.848352  257603 node_ready.go:58] node "custom-flannel-740627" has status "Ready":"False"
	I1031 00:35:58.345424  257603 node_ready.go:58] node "custom-flannel-740627" has status "Ready":"False"
	I1031 00:36:00.346318  257603 node_ready.go:58] node "custom-flannel-740627" has status "Ready":"False"
	I1031 00:35:58.860581  257291 pod_ready.go:102] pod "calico-kube-controllers-558d465845-9pzql" in "kube-system" namespace has status "Ready":"False"
	I1031 00:36:00.862632  257291 pod_ready.go:102] pod "calico-kube-controllers-558d465845-9pzql" in "kube-system" namespace has status "Ready":"False"
	I1031 00:36:01.361239  257291 pod_ready.go:92] pod "calico-kube-controllers-558d465845-9pzql" in "kube-system" namespace has status "Ready":"True"
	I1031 00:36:01.361269  257291 pod_ready.go:81] duration metric: took 22.526206897s waiting for pod "calico-kube-controllers-558d465845-9pzql" in "kube-system" namespace to be "Ready" ...
	I1031 00:36:01.361283  257291 pod_ready.go:78] waiting up to 15m0s for pod "calico-node-9gsn4" in "kube-system" namespace to be "Ready" ...
	I1031 00:36:01.368186  257291 pod_ready.go:92] pod "calico-node-9gsn4" in "kube-system" namespace has status "Ready":"True"
	I1031 00:36:01.368211  257291 pod_ready.go:81] duration metric: took 6.919364ms waiting for pod "calico-node-9gsn4" in "kube-system" namespace to be "Ready" ...
	I1031 00:36:01.368223  257291 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-r5f2n" in "kube-system" namespace to be "Ready" ...
	I1031 00:36:01.375185  257291 pod_ready.go:92] pod "coredns-5dd5756b68-r5f2n" in "kube-system" namespace has status "Ready":"True"
	I1031 00:36:01.375208  257291 pod_ready.go:81] duration metric: took 6.976685ms waiting for pod "coredns-5dd5756b68-r5f2n" in "kube-system" namespace to be "Ready" ...
	I1031 00:36:01.375219  257291 pod_ready.go:78] waiting up to 15m0s for pod "etcd-calico-740627" in "kube-system" namespace to be "Ready" ...
	I1031 00:36:01.381388  257291 pod_ready.go:92] pod "etcd-calico-740627" in "kube-system" namespace has status "Ready":"True"
	I1031 00:36:01.381466  257291 pod_ready.go:81] duration metric: took 6.238369ms waiting for pod "etcd-calico-740627" in "kube-system" namespace to be "Ready" ...
	I1031 00:36:01.381485  257291 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-calico-740627" in "kube-system" namespace to be "Ready" ...
	I1031 00:36:01.387496  257291 pod_ready.go:92] pod "kube-apiserver-calico-740627" in "kube-system" namespace has status "Ready":"True"
	I1031 00:36:01.387524  257291 pod_ready.go:81] duration metric: took 6.028984ms waiting for pod "kube-apiserver-calico-740627" in "kube-system" namespace to be "Ready" ...
	I1031 00:36:01.387537  257291 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-calico-740627" in "kube-system" namespace to be "Ready" ...
	I1031 00:36:01.756754  257291 pod_ready.go:92] pod "kube-controller-manager-calico-740627" in "kube-system" namespace has status "Ready":"True"
	I1031 00:36:01.756787  257291 pod_ready.go:81] duration metric: took 369.240692ms waiting for pod "kube-controller-manager-calico-740627" in "kube-system" namespace to be "Ready" ...
	I1031 00:36:01.756802  257291 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-jq9hw" in "kube-system" namespace to be "Ready" ...
	I1031 00:36:02.156514  257291 pod_ready.go:92] pod "kube-proxy-jq9hw" in "kube-system" namespace has status "Ready":"True"
	I1031 00:36:02.156536  257291 pod_ready.go:81] duration metric: took 399.727851ms waiting for pod "kube-proxy-jq9hw" in "kube-system" namespace to be "Ready" ...
	I1031 00:36:02.156546  257291 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-calico-740627" in "kube-system" namespace to be "Ready" ...
	I1031 00:36:02.557496  257291 pod_ready.go:92] pod "kube-scheduler-calico-740627" in "kube-system" namespace has status "Ready":"True"
	I1031 00:36:02.557600  257291 pod_ready.go:81] duration metric: took 401.044423ms waiting for pod "kube-scheduler-calico-740627" in "kube-system" namespace to be "Ready" ...
	I1031 00:36:02.557623  257291 pod_ready.go:38] duration metric: took 23.734906107s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:36:02.557666  257291 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:36:02.557791  257291 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:36:02.580521  257291 api_server.go:72] duration metric: took 33.003777388s to wait for apiserver process to appear ...
	I1031 00:36:02.580545  257291 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:36:02.580571  257291 api_server.go:253] Checking apiserver healthz at https://192.168.50.182:8443/healthz ...
	I1031 00:36:02.589537  257291 api_server.go:279] https://192.168.50.182:8443/healthz returned 200:
	ok
	I1031 00:36:02.591780  257291 api_server.go:141] control plane version: v1.28.3
	I1031 00:36:02.591812  257291 api_server.go:131] duration metric: took 11.259848ms to wait for apiserver health ...
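
	[editor's note] The healthz probe above is a plain HTTPS GET against the apiserver that must return 200 with the body "ok". A minimal sketch, assuming the address and port shown in the log and skipping TLS verification purely for illustration:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.50.182:8443/healthz")
		if err != nil {
			fmt.Println("healthz not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
	}
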
	I1031 00:36:02.591824  257291 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:36:02.764239  257291 system_pods.go:59] 9 kube-system pods found
	I1031 00:36:02.764274  257291 system_pods.go:61] "calico-kube-controllers-558d465845-9pzql" [d0ae971e-ee09-42b5-9f86-7aa2f9cd0b65] Running
	I1031 00:36:02.764281  257291 system_pods.go:61] "calico-node-9gsn4" [7d41b2a9-8eeb-4c56-b394-ea94612e8dbf] Running
	I1031 00:36:02.764285  257291 system_pods.go:61] "coredns-5dd5756b68-r5f2n" [a1549442-d07c-4d59-bd30-39c01e5f5177] Running
	I1031 00:36:02.764289  257291 system_pods.go:61] "etcd-calico-740627" [caf14a26-be6f-44e9-b363-dbe5e156d8c0] Running
	I1031 00:36:02.764293  257291 system_pods.go:61] "kube-apiserver-calico-740627" [a37fc4a0-b835-4f91-8f57-e8261f35142f] Running
	I1031 00:36:02.764297  257291 system_pods.go:61] "kube-controller-manager-calico-740627" [3e687d99-cf94-41c8-b965-a8f5a500e558] Running
	I1031 00:36:02.764301  257291 system_pods.go:61] "kube-proxy-jq9hw" [fb0239f1-6853-41a6-8a24-8c1d30bc98dd] Running
	I1031 00:36:02.764305  257291 system_pods.go:61] "kube-scheduler-calico-740627" [00e3f1bf-06d7-438d-b4ce-99667b64128b] Running
	I1031 00:36:02.764308  257291 system_pods.go:61] "storage-provisioner" [a7145b2b-54d3-4090-8f9f-1c1ac3c2e4d6] Running
	I1031 00:36:02.764315  257291 system_pods.go:74] duration metric: took 172.484161ms to wait for pod list to return data ...
	I1031 00:36:02.764322  257291 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:36:02.956080  257291 default_sa.go:45] found service account: "default"
	I1031 00:36:02.956108  257291 default_sa.go:55] duration metric: took 191.779067ms for default service account to be created ...
	I1031 00:36:02.956116  257291 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:36:03.164108  257291 system_pods.go:86] 9 kube-system pods found
	I1031 00:36:03.164145  257291 system_pods.go:89] "calico-kube-controllers-558d465845-9pzql" [d0ae971e-ee09-42b5-9f86-7aa2f9cd0b65] Running
	I1031 00:36:03.164156  257291 system_pods.go:89] "calico-node-9gsn4" [7d41b2a9-8eeb-4c56-b394-ea94612e8dbf] Running
	I1031 00:36:03.164160  257291 system_pods.go:89] "coredns-5dd5756b68-r5f2n" [a1549442-d07c-4d59-bd30-39c01e5f5177] Running
	I1031 00:36:03.164164  257291 system_pods.go:89] "etcd-calico-740627" [caf14a26-be6f-44e9-b363-dbe5e156d8c0] Running
	I1031 00:36:03.164168  257291 system_pods.go:89] "kube-apiserver-calico-740627" [a37fc4a0-b835-4f91-8f57-e8261f35142f] Running
	I1031 00:36:03.164172  257291 system_pods.go:89] "kube-controller-manager-calico-740627" [3e687d99-cf94-41c8-b965-a8f5a500e558] Running
	I1031 00:36:03.164176  257291 system_pods.go:89] "kube-proxy-jq9hw" [fb0239f1-6853-41a6-8a24-8c1d30bc98dd] Running
	I1031 00:36:03.164180  257291 system_pods.go:89] "kube-scheduler-calico-740627" [00e3f1bf-06d7-438d-b4ce-99667b64128b] Running
	I1031 00:36:03.164184  257291 system_pods.go:89] "storage-provisioner" [a7145b2b-54d3-4090-8f9f-1c1ac3c2e4d6] Running
	I1031 00:36:03.164191  257291 system_pods.go:126] duration metric: took 208.069841ms to wait for k8s-apps to be running ...
	I1031 00:36:03.164199  257291 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:36:03.164247  257291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:36:03.181134  257291 system_svc.go:56] duration metric: took 16.918014ms WaitForService to wait for kubelet.
	I1031 00:36:03.181170  257291 kubeadm.go:581] duration metric: took 33.604435023s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:36:03.181198  257291 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:36:03.357684  257291 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:36:03.357722  257291 node_conditions.go:123] node cpu capacity is 2
	I1031 00:36:03.357735  257291 node_conditions.go:105] duration metric: took 176.531077ms to run NodePressure ...
	I1031 00:36:03.357748  257291 start.go:228] waiting for startup goroutines ...
	I1031 00:36:03.357753  257291 start.go:233] waiting for cluster config update ...
	I1031 00:36:03.357764  257291 start.go:242] writing updated cluster config ...
	I1031 00:36:03.358050  257291 ssh_runner.go:195] Run: rm -f paused
	I1031 00:36:03.421673  257291 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1031 00:36:03.423836  257291 out.go:177] * Done! kubectl is now configured to use "calico-740627" cluster and "default" namespace by default
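
	[editor's note] The pod_ready.go checks earlier in this run poll each pod's PodReady condition until it reports True. A rough client-go sketch of that loop — the kubeconfig path and pod name are illustrative, not taken from the run:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod's PodReady condition is True.
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-calico-740627", metav1.GetOptions{})
			if err == nil && isReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}
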
	I1031 00:35:59.602389  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | domain enable-default-cni-740627 has defined MAC address 52:54:00:35:40:c2 in network mk-enable-default-cni-740627
	I1031 00:35:59.602988  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | unable to find current IP address of domain enable-default-cni-740627 in network mk-enable-default-cni-740627
	I1031 00:35:59.603020  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:35:59.602964  259670 retry.go:31] will retry after 1.272114332s: waiting for machine to come up
	I1031 00:36:00.876321  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | domain enable-default-cni-740627 has defined MAC address 52:54:00:35:40:c2 in network mk-enable-default-cni-740627
	I1031 00:36:00.876873  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | unable to find current IP address of domain enable-default-cni-740627 in network mk-enable-default-cni-740627
	I1031 00:36:00.876906  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:36:00.876844  259670 retry.go:31] will retry after 1.6911794s: waiting for machine to come up
	I1031 00:36:02.570355  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | domain enable-default-cni-740627 has defined MAC address 52:54:00:35:40:c2 in network mk-enable-default-cni-740627
	I1031 00:36:02.570805  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | unable to find current IP address of domain enable-default-cni-740627 in network mk-enable-default-cni-740627
	I1031 00:36:02.570828  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:36:02.570757  259670 retry.go:31] will retry after 1.772964864s: waiting for machine to come up
	I1031 00:36:00.846317  257603 node_ready.go:49] node "custom-flannel-740627" has status "Ready":"True"
	I1031 00:36:00.846348  257603 node_ready.go:38] duration metric: took 7.017813288s waiting for node "custom-flannel-740627" to be "Ready" ...
	I1031 00:36:00.846369  257603 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:36:00.861173  257603 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-w82gx" in "kube-system" namespace to be "Ready" ...
	I1031 00:36:02.886481  257603 pod_ready.go:102] pod "coredns-5dd5756b68-w82gx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:36:04.889615  257603 pod_ready.go:102] pod "coredns-5dd5756b68-w82gx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:36:04.346021  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | domain enable-default-cni-740627 has defined MAC address 52:54:00:35:40:c2 in network mk-enable-default-cni-740627
	I1031 00:36:04.346671  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | unable to find current IP address of domain enable-default-cni-740627 in network mk-enable-default-cni-740627
	I1031 00:36:04.346698  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:36:04.346618  259670 retry.go:31] will retry after 2.301906453s: waiting for machine to come up
	I1031 00:36:06.650970  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | domain enable-default-cni-740627 has defined MAC address 52:54:00:35:40:c2 in network mk-enable-default-cni-740627
	I1031 00:36:06.651432  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | unable to find current IP address of domain enable-default-cni-740627 in network mk-enable-default-cni-740627
	I1031 00:36:06.651464  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:36:06.651387  259670 retry.go:31] will retry after 2.928416898s: waiting for machine to come up
	I1031 00:36:07.386290  257603 pod_ready.go:102] pod "coredns-5dd5756b68-w82gx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:36:09.387461  257603 pod_ready.go:102] pod "coredns-5dd5756b68-w82gx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:36:09.582030  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | domain enable-default-cni-740627 has defined MAC address 52:54:00:35:40:c2 in network mk-enable-default-cni-740627
	I1031 00:36:09.582595  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | unable to find current IP address of domain enable-default-cni-740627 in network mk-enable-default-cni-740627
	I1031 00:36:09.582631  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:36:09.582552  259670 retry.go:31] will retry after 4.002404182s: waiting for machine to come up
	I1031 00:36:13.586970  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | domain enable-default-cni-740627 has defined MAC address 52:54:00:35:40:c2 in network mk-enable-default-cni-740627
	I1031 00:36:13.587560  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | unable to find current IP address of domain enable-default-cni-740627 in network mk-enable-default-cni-740627
	I1031 00:36:13.587592  259617 main.go:141] libmachine: (enable-default-cni-740627) DBG | I1031 00:36:13.587497  259670 retry.go:31] will retry after 3.496331913s: waiting for machine to come up
	I1031 00:36:11.889069  257603 pod_ready.go:102] pod "coredns-5dd5756b68-w82gx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:36:14.385613  257603 pod_ready.go:102] pod "coredns-5dd5756b68-w82gx" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-31 00:12:49 UTC, ends at Tue 2023-10-31 00:36:17 UTC. --
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.144689959Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712577144677853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=7a92c3d5-4e19-49cc-aed0-eee192b391e2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.145503786Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a5337f86-53fc-4f4b-8b02-b6aa66c5c773 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.145594268Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a5337f86-53fc-4f4b-8b02-b6aa66c5c773 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.145744082Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:813f1afbf382aa04ee9ab12f144c6eb3976b64bac30b57e03c324ac08fd4ea11,PodSandboxId:c2da5d55b35eef79c0f1d94dca3535d4791cfd0aa646756a2aeb2fde5a160852,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711503371081417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 995d33e4-0d28-4efb-8d30-d5a05d04b61c,},Annotations:map[string]string{io.kubernetes.container.hash: 7328c257,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc2e201d615c23cdc675ddea668efcfe0894fcdd1d859ee087f211067711e58b,PodSandboxId:7f1e8084edcb44248ddafdd2e2ecfc747e71b1881df67aa1e868d4b3734346b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698711503008272380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77gzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7cb1c4a-2ad0-47b9-bca4-2e03d4e1cf39,},Annotations:map[string]string{io.kubernetes.container.hash: 8505e5c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aef40c6e4bfe3124f3ef087905b08918d48b996b714839dee7dccf2c015e837,PodSandboxId:3796f9fef2d869e41f233f1ce09fa13b899aec34351dba9af7dfeeec119f35a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698711502151909643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pjtg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c771175-3c51-4988-8b90-58ff0e33a5f8,},Annotations:map[string]string{io.kubernetes.container.hash: ce4a43d1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f64c01c7bd84f2382ba68e42d6ab3fe5c5bad706ae48085926125b1c3aa23dba,PodSandboxId:22670270d17793ff3d376e2d98ad881063cacd2a724649785bd9a0dd923c188f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698711478548158629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: df1f27d844d6669a28f6800dcf5d9773,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69023a2f35d6d12adc74adb3adad2f52a48bd524a1f912655d92ba31e9a24bdc,PodSandboxId:7f97ce24e0a5d8e765ecadd59dc52a3ebff5704a7d4d57d8c35cd9a380dc12d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698711478161631940,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442209055b3cd7cb3
c907644e1b24e12,},Annotations:map[string]string{io.kubernetes.container.hash: f7fa274a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0d1f50cf5cd5dc1a87581ee5317a31c21d00d219996334b0a2f3cbee1e70ff,PodSandboxId:eb12bb06257706a7cbf2d1ccdf84e68c056a4cda563b1f90fda5e93e7baac002,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698711477885986899,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 5747af2482af7359fd79d651fa78982a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68cebba71341b6b090d38cbaff10ff4cfbbdc381e95d94639ec7589dbcda0b5d,PodSandboxId:6c9a1afeb465f99437c1dc89dd3236f16b4ae59a8c5e43dccef61d5619771b68,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698711477858320375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 530401226fca04519b09aba8fa4e5da5,},Annotations:map[string]string{io.kubernetes.container.hash: 208c43ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a5337f86-53fc-4f4b-8b02-b6aa66c5c773 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.202074135Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6b1fa9aa-2cd9-4d2a-b547-62777934b232 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.202232969Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6b1fa9aa-2cd9-4d2a-b547-62777934b232 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.203582177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7a7aac35-337e-4a1f-80cd-929336744896 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.204135845Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712577204114564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=7a7aac35-337e-4a1f-80cd-929336744896 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.204681124Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=98ff0e83-bfdf-4b45-8b06-6b1c7e674ceb name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.204724762Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=98ff0e83-bfdf-4b45-8b06-6b1c7e674ceb name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.204998769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:813f1afbf382aa04ee9ab12f144c6eb3976b64bac30b57e03c324ac08fd4ea11,PodSandboxId:c2da5d55b35eef79c0f1d94dca3535d4791cfd0aa646756a2aeb2fde5a160852,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711503371081417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 995d33e4-0d28-4efb-8d30-d5a05d04b61c,},Annotations:map[string]string{io.kubernetes.container.hash: 7328c257,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc2e201d615c23cdc675ddea668efcfe0894fcdd1d859ee087f211067711e58b,PodSandboxId:7f1e8084edcb44248ddafdd2e2ecfc747e71b1881df67aa1e868d4b3734346b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698711503008272380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77gzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7cb1c4a-2ad0-47b9-bca4-2e03d4e1cf39,},Annotations:map[string]string{io.kubernetes.container.hash: 8505e5c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aef40c6e4bfe3124f3ef087905b08918d48b996b714839dee7dccf2c015e837,PodSandboxId:3796f9fef2d869e41f233f1ce09fa13b899aec34351dba9af7dfeeec119f35a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698711502151909643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pjtg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c771175-3c51-4988-8b90-58ff0e33a5f8,},Annotations:map[string]string{io.kubernetes.container.hash: ce4a43d1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f64c01c7bd84f2382ba68e42d6ab3fe5c5bad706ae48085926125b1c3aa23dba,PodSandboxId:22670270d17793ff3d376e2d98ad881063cacd2a724649785bd9a0dd923c188f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698711478548158629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: df1f27d844d6669a28f6800dcf5d9773,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69023a2f35d6d12adc74adb3adad2f52a48bd524a1f912655d92ba31e9a24bdc,PodSandboxId:7f97ce24e0a5d8e765ecadd59dc52a3ebff5704a7d4d57d8c35cd9a380dc12d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698711478161631940,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442209055b3cd7cb3
c907644e1b24e12,},Annotations:map[string]string{io.kubernetes.container.hash: f7fa274a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0d1f50cf5cd5dc1a87581ee5317a31c21d00d219996334b0a2f3cbee1e70ff,PodSandboxId:eb12bb06257706a7cbf2d1ccdf84e68c056a4cda563b1f90fda5e93e7baac002,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698711477885986899,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 5747af2482af7359fd79d651fa78982a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68cebba71341b6b090d38cbaff10ff4cfbbdc381e95d94639ec7589dbcda0b5d,PodSandboxId:6c9a1afeb465f99437c1dc89dd3236f16b4ae59a8c5e43dccef61d5619771b68,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698711477858320375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 530401226fca04519b09aba8fa4e5da5,},Annotations:map[string]string{io.kubernetes.container.hash: 208c43ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=98ff0e83-bfdf-4b45-8b06-6b1c7e674ceb name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.254500460Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=788c62d8-4ade-49e2-81c8-ed5564297895 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.254607799Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=788c62d8-4ade-49e2-81c8-ed5564297895 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.256097663Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=57e1cefb-c417-4ee3-b74d-ca54c7053c84 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.256550207Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712577256536612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=57e1cefb-c417-4ee3-b74d-ca54c7053c84 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.257233302Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=562fe5b0-43ba-426c-836f-3ff15e474d25 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.257309445Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=562fe5b0-43ba-426c-836f-3ff15e474d25 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.257519309Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:813f1afbf382aa04ee9ab12f144c6eb3976b64bac30b57e03c324ac08fd4ea11,PodSandboxId:c2da5d55b35eef79c0f1d94dca3535d4791cfd0aa646756a2aeb2fde5a160852,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711503371081417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 995d33e4-0d28-4efb-8d30-d5a05d04b61c,},Annotations:map[string]string{io.kubernetes.container.hash: 7328c257,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc2e201d615c23cdc675ddea668efcfe0894fcdd1d859ee087f211067711e58b,PodSandboxId:7f1e8084edcb44248ddafdd2e2ecfc747e71b1881df67aa1e868d4b3734346b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698711503008272380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77gzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7cb1c4a-2ad0-47b9-bca4-2e03d4e1cf39,},Annotations:map[string]string{io.kubernetes.container.hash: 8505e5c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aef40c6e4bfe3124f3ef087905b08918d48b996b714839dee7dccf2c015e837,PodSandboxId:3796f9fef2d869e41f233f1ce09fa13b899aec34351dba9af7dfeeec119f35a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698711502151909643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pjtg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c771175-3c51-4988-8b90-58ff0e33a5f8,},Annotations:map[string]string{io.kubernetes.container.hash: ce4a43d1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f64c01c7bd84f2382ba68e42d6ab3fe5c5bad706ae48085926125b1c3aa23dba,PodSandboxId:22670270d17793ff3d376e2d98ad881063cacd2a724649785bd9a0dd923c188f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698711478548158629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: df1f27d844d6669a28f6800dcf5d9773,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69023a2f35d6d12adc74adb3adad2f52a48bd524a1f912655d92ba31e9a24bdc,PodSandboxId:7f97ce24e0a5d8e765ecadd59dc52a3ebff5704a7d4d57d8c35cd9a380dc12d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698711478161631940,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442209055b3cd7cb3
c907644e1b24e12,},Annotations:map[string]string{io.kubernetes.container.hash: f7fa274a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0d1f50cf5cd5dc1a87581ee5317a31c21d00d219996334b0a2f3cbee1e70ff,PodSandboxId:eb12bb06257706a7cbf2d1ccdf84e68c056a4cda563b1f90fda5e93e7baac002,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698711477885986899,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 5747af2482af7359fd79d651fa78982a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68cebba71341b6b090d38cbaff10ff4cfbbdc381e95d94639ec7589dbcda0b5d,PodSandboxId:6c9a1afeb465f99437c1dc89dd3236f16b4ae59a8c5e43dccef61d5619771b68,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698711477858320375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 530401226fca04519b09aba8fa4e5da5,},Annotations:map[string]string{io.kubernetes.container.hash: 208c43ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=562fe5b0-43ba-426c-836f-3ff15e474d25 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.297452235Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a7e13ea6-fe2f-49ff-8704-c14ebaa24728 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.297569101Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a7e13ea6-fe2f-49ff-8704-c14ebaa24728 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.299154670Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f60287a6-f40c-4110-bc05-1f65386a955a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.299653550Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712577299632876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f60287a6-f40c-4110-bc05-1f65386a955a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.300254699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6bbdfe2d-1779-4fb9-b370-641fe36c4b8e name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.300338467Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6bbdfe2d-1779-4fb9-b370-641fe36c4b8e name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:36:17 default-k8s-diff-port-892233 crio[714]: time="2023-10-31 00:36:17.300496633Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:813f1afbf382aa04ee9ab12f144c6eb3976b64bac30b57e03c324ac08fd4ea11,PodSandboxId:c2da5d55b35eef79c0f1d94dca3535d4791cfd0aa646756a2aeb2fde5a160852,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711503371081417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 995d33e4-0d28-4efb-8d30-d5a05d04b61c,},Annotations:map[string]string{io.kubernetes.container.hash: 7328c257,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc2e201d615c23cdc675ddea668efcfe0894fcdd1d859ee087f211067711e58b,PodSandboxId:7f1e8084edcb44248ddafdd2e2ecfc747e71b1881df67aa1e868d4b3734346b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698711503008272380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77gzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7cb1c4a-2ad0-47b9-bca4-2e03d4e1cf39,},Annotations:map[string]string{io.kubernetes.container.hash: 8505e5c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aef40c6e4bfe3124f3ef087905b08918d48b996b714839dee7dccf2c015e837,PodSandboxId:3796f9fef2d869e41f233f1ce09fa13b899aec34351dba9af7dfeeec119f35a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698711502151909643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pjtg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c771175-3c51-4988-8b90-58ff0e33a5f8,},Annotations:map[string]string{io.kubernetes.container.hash: ce4a43d1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f64c01c7bd84f2382ba68e42d6ab3fe5c5bad706ae48085926125b1c3aa23dba,PodSandboxId:22670270d17793ff3d376e2d98ad881063cacd2a724649785bd9a0dd923c188f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698711478548158629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: df1f27d844d6669a28f6800dcf5d9773,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69023a2f35d6d12adc74adb3adad2f52a48bd524a1f912655d92ba31e9a24bdc,PodSandboxId:7f97ce24e0a5d8e765ecadd59dc52a3ebff5704a7d4d57d8c35cd9a380dc12d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698711478161631940,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 442209055b3cd7cb3
c907644e1b24e12,},Annotations:map[string]string{io.kubernetes.container.hash: f7fa274a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0d1f50cf5cd5dc1a87581ee5317a31c21d00d219996334b0a2f3cbee1e70ff,PodSandboxId:eb12bb06257706a7cbf2d1ccdf84e68c056a4cda563b1f90fda5e93e7baac002,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698711477885986899,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 5747af2482af7359fd79d651fa78982a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68cebba71341b6b090d38cbaff10ff4cfbbdc381e95d94639ec7589dbcda0b5d,PodSandboxId:6c9a1afeb465f99437c1dc89dd3236f16b4ae59a8c5e43dccef61d5619771b68,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698711477858320375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-892233,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 530401226fca04519b09aba8fa4e5da5,},Annotations:map[string]string{io.kubernetes.container.hash: 208c43ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6bbdfe2d-1779-4fb9-b370-641fe36c4b8e name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	813f1afbf382a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 minutes ago      Running             storage-provisioner       0                   c2da5d55b35ee       storage-provisioner
	cc2e201d615c2       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   17 minutes ago      Running             kube-proxy                0                   7f1e8084edcb4       kube-proxy-77gzz
	7aef40c6e4bfe       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   17 minutes ago      Running             coredns                   0                   3796f9fef2d86       coredns-5dd5756b68-pjtg4
	f64c01c7bd84f       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   18 minutes ago      Running             kube-scheduler            2                   22670270d1779       kube-scheduler-default-k8s-diff-port-892233
	69023a2f35d6d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   18 minutes ago      Running             etcd                      2                   7f97ce24e0a5d       etcd-default-k8s-diff-port-892233
	5f0d1f50cf5cd       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   18 minutes ago      Running             kube-controller-manager   2                   eb12bb0625770       kube-controller-manager-default-k8s-diff-port-892233
	68cebba71341b       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   18 minutes ago      Running             kube-apiserver            2                   6c9a1afeb465f       kube-apiserver-default-k8s-diff-port-892233
	
	* 
	* ==> coredns [7aef40c6e4bfe3124f3ef087905b08918d48b996b714839dee7dccf2c015e837] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:60598 - 57135 "HINFO IN 3441151810271889532.4856826152383992695. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010174265s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-892233
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-892233
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4
	                    minikube.k8s.io/name=default-k8s-diff-port-892233
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_31T00_18_06_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 00:18:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-892233
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 Oct 2023 00:36:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 00:33:45 +0000   Tue, 31 Oct 2023 00:17:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 00:33:45 +0000   Tue, 31 Oct 2023 00:17:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 00:33:45 +0000   Tue, 31 Oct 2023 00:17:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 00:33:45 +0000   Tue, 31 Oct 2023 00:18:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    default-k8s-diff-port-892233
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c0f68a3c36a4e5da9f1472b2df10596
	  System UUID:                5c0f68a3-c36a-4e5d-a9f1-472b2df10596
	  Boot ID:                    45d6a9e1-a1f1-47d9-a4b7-7aae0c4f98c9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-pjtg4                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-default-k8s-diff-port-892233                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kube-apiserver-default-k8s-diff-port-892233             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-892233    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-77gzz                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-default-k8s-diff-port-892233             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-57f55c9bc5-8pc87                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-892233 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-892233 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node default-k8s-diff-port-892233 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m                kubelet          Node default-k8s-diff-port-892233 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m                kubelet          Node default-k8s-diff-port-892233 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m                kubelet          Node default-k8s-diff-port-892233 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             18m                kubelet          Node default-k8s-diff-port-892233 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                18m                kubelet          Node default-k8s-diff-port-892233 status is now: NodeReady
	  Normal  RegisteredNode           17m                node-controller  Node default-k8s-diff-port-892233 event: Registered Node default-k8s-diff-port-892233 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct31 00:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069357] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.549182] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.557881] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.156785] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.572054] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct31 00:13] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.133449] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.171002] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.129828] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.239130] systemd-fstab-generator[698]: Ignoring "noauto" for root device
	[ +18.355885] systemd-fstab-generator[914]: Ignoring "noauto" for root device
	[ +19.559307] kauditd_printk_skb: 29 callbacks suppressed
	[Oct31 00:17] systemd-fstab-generator[3543]: Ignoring "noauto" for root device
	[Oct31 00:18] systemd-fstab-generator[3871]: Ignoring "noauto" for root device
	[ +13.447169] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.318558] kauditd_printk_skb: 7 callbacks suppressed
	[Oct31 00:33] hrtimer: interrupt took 11287350 ns
	
	* 
	* ==> etcd [69023a2f35d6d12adc74adb3adad2f52a48bd524a1f912655d92ba31e9a24bdc] <==
	* {"level":"info","ts":"2023-10-31T00:18:00.574158Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T00:18:00.581222Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-31T00:18:00.581346Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-31T00:28:00.963058Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":730}
	{"level":"info","ts":"2023-10-31T00:28:00.965621Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":730,"took":"2.194127ms","hash":1848859383}
	{"level":"info","ts":"2023-10-31T00:28:00.965702Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1848859383,"revision":730,"compact-revision":-1}
	{"level":"info","ts":"2023-10-31T00:33:00.971391Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":973}
	{"level":"info","ts":"2023-10-31T00:33:00.974933Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":973,"took":"1.790664ms","hash":1653039213}
	{"level":"info","ts":"2023-10-31T00:33:00.97502Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1653039213,"revision":973,"compact-revision":730}
	{"level":"info","ts":"2023-10-31T00:33:01.526884Z","caller":"traceutil/trace.go:171","msg":"trace[1837216698] linearizableReadLoop","detail":"{readStateIndex:1411; appliedIndex:1410; }","duration":"331.174452ms","start":"2023-10-31T00:33:01.195613Z","end":"2023-10-31T00:33:01.526787Z","steps":["trace[1837216698] 'read index received'  (duration: 331.039265ms)","trace[1837216698] 'applied index is now lower than readState.Index'  (duration: 134.662µs)"],"step_count":2}
	{"level":"info","ts":"2023-10-31T00:33:01.527206Z","caller":"traceutil/trace.go:171","msg":"trace[887808252] transaction","detail":"{read_only:false; response_revision:1218; number_of_response:1; }","duration":"357.004528ms","start":"2023-10-31T00:33:01.170188Z","end":"2023-10-31T00:33:01.527193Z","steps":["trace[887808252] 'process raft request'  (duration: 356.449623ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-31T00:33:01.528208Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-31T00:33:01.170167Z","time spent":"357.09987ms","remote":"127.0.0.1:50508","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1215 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2023-10-31T00:33:01.528354Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.629728ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-31T00:33:01.52848Z","caller":"traceutil/trace.go:171","msg":"trace[618185284] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1218; }","duration":"191.760339ms","start":"2023-10-31T00:33:01.336704Z","end":"2023-10-31T00:33:01.528464Z","steps":["trace[618185284] 'agreement among raft nodes before linearized reading'  (duration: 191.600443ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-31T00:33:01.528754Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"333.157872ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-31T00:33:01.528777Z","caller":"traceutil/trace.go:171","msg":"trace[2135360268] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1218; }","duration":"333.183708ms","start":"2023-10-31T00:33:01.195586Z","end":"2023-10-31T00:33:01.528769Z","steps":["trace[2135360268] 'agreement among raft nodes before linearized reading'  (duration: 333.135862ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-31T00:33:01.528881Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-31T00:33:01.195573Z","time spent":"333.298905ms","remote":"127.0.0.1:50472","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2023-10-31T00:33:52.208879Z","caller":"traceutil/trace.go:171","msg":"trace[501981909] transaction","detail":"{read_only:false; response_revision:1260; number_of_response:1; }","duration":"300.661763ms","start":"2023-10-31T00:33:51.908198Z","end":"2023-10-31T00:33:52.20886Z","steps":["trace[501981909] 'process raft request'  (duration: 300.480213ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-31T00:33:52.209134Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-31T00:33:51.908182Z","time spent":"300.844478ms","remote":"127.0.0.1:50508","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1258 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-10-31T00:34:37.372593Z","caller":"traceutil/trace.go:171","msg":"trace[1251673676] transaction","detail":"{read_only:false; response_revision:1297; number_of_response:1; }","duration":"204.201486ms","start":"2023-10-31T00:34:37.168359Z","end":"2023-10-31T00:34:37.37256Z","steps":["trace[1251673676] 'process raft request'  (duration: 203.715527ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-31T00:35:27.136037Z","caller":"traceutil/trace.go:171","msg":"trace[1314483130] transaction","detail":"{read_only:false; response_revision:1337; number_of_response:1; }","duration":"129.97221ms","start":"2023-10-31T00:35:27.006024Z","end":"2023-10-31T00:35:27.135997Z","steps":["trace[1314483130] 'process raft request'  (duration: 129.670366ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-31T00:35:31.327121Z","caller":"traceutil/trace.go:171","msg":"trace[285831183] linearizableReadLoop","detail":"{readStateIndex:1564; appliedIndex:1563; }","duration":"125.377676ms","start":"2023-10-31T00:35:31.201716Z","end":"2023-10-31T00:35:31.327094Z","steps":["trace[285831183] 'read index received'  (duration: 125.043518ms)","trace[285831183] 'applied index is now lower than readState.Index'  (duration: 333.126µs)"],"step_count":2}
	{"level":"info","ts":"2023-10-31T00:35:31.327345Z","caller":"traceutil/trace.go:171","msg":"trace[445599720] transaction","detail":"{read_only:false; response_revision:1340; number_of_response:1; }","duration":"167.687184ms","start":"2023-10-31T00:35:31.159636Z","end":"2023-10-31T00:35:31.327323Z","steps":["trace[445599720] 'process raft request'  (duration: 167.292568ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-31T00:35:31.327539Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.757647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-31T00:35:31.327968Z","caller":"traceutil/trace.go:171","msg":"trace[158834429] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1340; }","duration":"126.266079ms","start":"2023-10-31T00:35:31.201685Z","end":"2023-10-31T00:35:31.327951Z","steps":["trace[158834429] 'agreement among raft nodes before linearized reading'  (duration: 125.675755ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  00:36:17 up 23 min,  0 users,  load average: 0.33, 0.30, 0.27
	Linux default-k8s-diff-port-892233 5.10.57 #1 SMP Mon Oct 30 21:42:24 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [68cebba71341b6b090d38cbaff10ff4cfbbdc381e95d94639ec7589dbcda0b5d] <==
	* I1031 00:33:02.545192       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1031 00:33:03.545016       1 handler_proxy.go:93] no RequestInfo found in the context
	W1031 00:33:03.545115       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:33:03.545227       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:33:03.545271       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1031 00:33:03.545121       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1031 00:33:03.546361       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:34:02.415547       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1031 00:34:03.546208       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:34:03.546303       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:34:03.546317       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1031 00:34:03.547352       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:34:03.547415       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1031 00:34:03.547429       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:35:02.415456       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1031 00:36:02.415989       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1031 00:36:03.547345       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:36:03.547580       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:36:03.547647       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1031 00:36:03.547590       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:36:03.547715       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1031 00:36:03.548773       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [5f0d1f50cf5cd5dc1a87581ee5317a31c21d00d219996334b0a2f3cbee1e70ff] <==
	* I1031 00:30:19.799751       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:30:49.171769       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:30:49.809017       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:31:19.179210       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:31:19.820109       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:31:49.187622       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:31:49.830521       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:32:19.193765       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:32:19.841044       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:32:49.202635       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:32:49.864057       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:33:19.208756       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:33:19.878514       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:33:49.217310       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:33:49.887758       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:34:19.224709       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:34:19.903726       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1031 00:34:25.333086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="913.842µs"
	I1031 00:34:39.334742       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="455.781µs"
	E1031 00:34:49.232418       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:34:49.916373       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:35:19.240854       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:35:19.926917       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:35:49.246976       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:35:49.938988       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [cc2e201d615c23cdc675ddea668efcfe0894fcdd1d859ee087f211067711e58b] <==
	* I1031 00:18:23.438712       1 server_others.go:69] "Using iptables proxy"
	I1031 00:18:23.472984       1 node.go:141] Successfully retrieved node IP: 192.168.39.2
	I1031 00:18:23.639673       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1031 00:18:23.639740       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1031 00:18:23.645001       1 server_others.go:152] "Using iptables Proxier"
	I1031 00:18:23.646139       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1031 00:18:23.646421       1 server.go:846] "Version info" version="v1.28.3"
	I1031 00:18:23.646431       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 00:18:23.654478       1 config.go:188] "Starting service config controller"
	I1031 00:18:23.655425       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1031 00:18:23.655674       1 config.go:97] "Starting endpoint slice config controller"
	I1031 00:18:23.656896       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1031 00:18:23.659044       1 config.go:315] "Starting node config controller"
	I1031 00:18:23.659089       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1031 00:18:23.756090       1 shared_informer.go:318] Caches are synced for service config
	I1031 00:18:23.757385       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1031 00:18:23.759234       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [f64c01c7bd84f2382ba68e42d6ab3fe5c5bad706ae48085926125b1c3aa23dba] <==
	* W1031 00:18:02.578597       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 00:18:02.578605       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1031 00:18:02.578897       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1031 00:18:02.579105       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1031 00:18:03.471528       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1031 00:18:03.471636       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1031 00:18:03.530606       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 00:18:03.531951       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1031 00:18:03.558710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1031 00:18:03.558765       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1031 00:18:03.592304       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1031 00:18:03.592401       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1031 00:18:03.685191       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1031 00:18:03.685289       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1031 00:18:03.702086       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1031 00:18:03.702235       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1031 00:18:03.734648       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1031 00:18:03.734894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1031 00:18:03.815647       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1031 00:18:03.815745       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1031 00:18:03.853349       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1031 00:18:03.853444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1031 00:18:03.872321       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1031 00:18:03.872415       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1031 00:18:06.557592       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-31 00:12:49 UTC, ends at Tue 2023-10-31 00:36:17 UTC. --
	Oct 31 00:34:06 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:34:06.404188    3878 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 00:34:06 default-k8s-diff-port-892233 kubelet[3878]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 00:34:06 default-k8s-diff-port-892233 kubelet[3878]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 00:34:06 default-k8s-diff-port-892233 kubelet[3878]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 00:34:13 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:34:13.328888    3878 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 31 00:34:13 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:34:13.328965    3878 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 31 00:34:13 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:34:13.329275    3878 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tjrdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe
:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-8pc87_kube-system(c91683ff-11bf-4530-90c3-91f4b28e2dab): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 31 00:34:13 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:34:13.329426    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:34:25 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:34:25.313926    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:34:39 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:34:39.315213    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:34:54 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:34:54.317660    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:35:06 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:35:06.313603    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:35:06 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:35:06.403304    3878 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 00:35:06 default-k8s-diff-port-892233 kubelet[3878]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 00:35:06 default-k8s-diff-port-892233 kubelet[3878]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 00:35:06 default-k8s-diff-port-892233 kubelet[3878]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 00:35:17 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:35:17.314930    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:35:28 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:35:28.315421    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:35:42 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:35:42.315097    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:35:53 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:35:53.314529    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:36:04 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:36:04.314614    3878 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-8pc87" podUID="c91683ff-11bf-4530-90c3-91f4b28e2dab"
	Oct 31 00:36:06 default-k8s-diff-port-892233 kubelet[3878]: E1031 00:36:06.404956    3878 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 00:36:06 default-k8s-diff-port-892233 kubelet[3878]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 00:36:06 default-k8s-diff-port-892233 kubelet[3878]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 00:36:06 default-k8s-diff-port-892233 kubelet[3878]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	* 
	* ==> storage-provisioner [813f1afbf382aa04ee9ab12f144c6eb3976b64bac30b57e03c324ac08fd4ea11] <==
	* I1031 00:18:23.679879       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1031 00:18:23.691765       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1031 00:18:23.692068       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1031 00:18:23.701184       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1031 00:18:23.701946       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-892233_0ec05bdf-f9e5-4157-abaa-89a25bfea216!
	I1031 00:18:23.706506       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6c9cff2d-4c51-447b-9111-12ba65c70537", APIVersion:"v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-892233_0ec05bdf-f9e5-4157-abaa-89a25bfea216 became leader
	I1031 00:18:23.803725       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-892233_0ec05bdf-f9e5-4157-abaa-89a25bfea216!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-892233 -n default-k8s-diff-port-892233
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-892233 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-8pc87
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-892233 describe pod metrics-server-57f55c9bc5-8pc87
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-892233 describe pod metrics-server-57f55c9bc5-8pc87: exit status 1 (74.136891ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-8pc87" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-892233 describe pod metrics-server-57f55c9bc5-8pc87: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (529.16s)
E1031 00:37:08.185566  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1031 00:37:16.794236  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/client.crt: no such file or directory
E1031 00:37:16.799579  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/client.crt: no such file or directory
E1031 00:37:16.809903  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/client.crt: no such file or directory
E1031 00:37:16.830272  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/client.crt: no such file or directory
E1031 00:37:16.870584  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/client.crt: no such file or directory
E1031 00:37:16.950962  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/client.crt: no such file or directory
E1031 00:37:17.112016  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/client.crt: no such file or directory
E1031 00:37:17.432452  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/client.crt: no such file or directory
E1031 00:37:18.073627  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/client.crt: no such file or directory
E1031 00:37:18.776487  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.crt: no such file or directory
E1031 00:37:19.353855  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/client.crt: no such file or directory
E1031 00:37:21.914134  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/client.crt: no such file or directory
E1031 00:37:27.034972  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/client.crt: no such file or directory
E1031 00:37:33.681034  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (148.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-225140 -n old-k8s-version-225140
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-10-31 00:31:56.630004077 +0000 UTC m=+5418.880020994
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-225140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-225140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.225µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-225140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-225140 -n old-k8s-version-225140
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-225140 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-225140 logs -n 25: (1.637410339s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p cert-options-344463                                 | cert-options-344463          | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:02 UTC | 31 Oct 23 00:02 UTC |
	| start   | -p no-preload-640155                                   | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:02 UTC | 31 Oct 23 00:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| start   | -p stopped-upgrade-237143                              | stopped-upgrade-237143       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p stopped-upgrade-237143                              | stopped-upgrade-237143       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC | 31 Oct 23 00:04 UTC |
	| start   | -p embed-certs-078843                                  | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC | 31 Oct 23 00:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-225140        | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC | 31 Oct 23 00:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-225140                              | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-640155             | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:05 UTC | 31 Oct 23 00:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-640155                                   | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p cert-expiration-663908                              | cert-expiration-663908       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:05 UTC | 31 Oct 23 00:06 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-078843            | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-078843                                  | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-663908                              | cert-expiration-663908       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-221554 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:06 UTC |
	|         | disable-driver-mounts-221554                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:07 UTC |
	|         | default-k8s-diff-port-892233                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-225140             | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-225140                              | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC | 31 Oct 23 00:20 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-892233  | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC | 31 Oct 23 00:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC |                     |
	|         | default-k8s-diff-port-892233                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-640155                  | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-640155                                   | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC | 31 Oct 23 00:22 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-078843                 | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-078843                                  | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:08 UTC | 31 Oct 23 00:17 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-892233       | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:09 UTC | 31 Oct 23 00:18 UTC |
	|         | default-k8s-diff-port-892233                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 00:09:59
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 00:09:59.171110  249055 out.go:296] Setting OutFile to fd 1 ...
	I1031 00:09:59.171372  249055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:09:59.171383  249055 out.go:309] Setting ErrFile to fd 2...
	I1031 00:09:59.171387  249055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:09:59.171591  249055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1031 00:09:59.172151  249055 out.go:303] Setting JSON to false
	I1031 00:09:59.173091  249055 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28351,"bootTime":1698682648,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 00:09:59.173154  249055 start.go:138] virtualization: kvm guest
	I1031 00:09:59.175712  249055 out.go:177] * [default-k8s-diff-port-892233] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 00:09:59.177218  249055 notify.go:220] Checking for updates...
	I1031 00:09:59.177238  249055 out.go:177]   - MINIKUBE_LOCATION=17527
	I1031 00:09:59.178590  249055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 00:09:59.179936  249055 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:09:59.181243  249055 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1031 00:09:59.182619  249055 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 00:09:59.184021  249055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 00:09:59.185755  249055 config.go:182] Loaded profile config "default-k8s-diff-port-892233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:09:59.186187  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:09:59.186242  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:09:59.200537  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I1031 00:09:59.201002  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:09:59.201576  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:09:59.201596  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:09:59.201949  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:09:59.202159  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:09:59.202362  249055 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 00:09:59.202635  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:09:59.202680  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:09:59.216197  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35869
	I1031 00:09:59.216575  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:09:59.216998  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:09:59.217027  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:09:59.217349  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:09:59.217537  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:09:59.250565  249055 out.go:177] * Using the kvm2 driver based on existing profile
	I1031 00:09:59.251974  249055 start.go:298] selected driver: kvm2
	I1031 00:09:59.251988  249055 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-892233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-892233 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.2 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:09:59.252123  249055 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 00:09:59.253132  249055 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:09:59.253220  249055 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17527-208817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 00:09:59.266948  249055 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 00:09:59.267297  249055 start_flags.go:934] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 00:09:59.267362  249055 cni.go:84] Creating CNI manager for ""
	I1031 00:09:59.267383  249055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:09:59.267401  249055 start_flags.go:323] config:
	{Name:default-k8s-diff-port-892233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-892233 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.2 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:09:59.267557  249055 iso.go:125] acquiring lock: {Name:mk17c26869b21ec4c3726ac5b4b2fb393d92c043 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:09:59.269225  249055 out.go:177] * Starting control plane node default-k8s-diff-port-892233 in cluster default-k8s-diff-port-892233
	I1031 00:09:57.481224  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:00.553221  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:09:59.270407  249055 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:09:59.270449  249055 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1031 00:09:59.270460  249055 cache.go:56] Caching tarball of preloaded images
	I1031 00:09:59.270553  249055 preload.go:174] Found /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1031 00:09:59.270569  249055 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1031 00:09:59.270702  249055 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/config.json ...
	I1031 00:09:59.270937  249055 start.go:365] acquiring machines lock for default-k8s-diff-port-892233: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 00:10:06.633217  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:09.705265  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:15.785240  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:18.857227  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:24.937215  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:28.009292  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:34.089205  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:37.161208  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:43.241288  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:46.313160  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:52.393273  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:10:55.465205  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:01.545192  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:04.617227  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:10.697233  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:13.769258  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:19.849250  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:22.921270  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:29.001178  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:32.073257  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:38.153271  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:41.225244  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:47.305235  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:50.377235  248084 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.65:22: connect: no route to host
	I1031 00:11:53.381665  248387 start.go:369] acquired machines lock for "no-preload-640155" in 4m7.945210729s
	I1031 00:11:53.381722  248387 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:11:53.381734  248387 fix.go:54] fixHost starting: 
	I1031 00:11:53.382372  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:11:53.382418  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:11:53.397155  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43017
	I1031 00:11:53.397704  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:11:53.398181  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:11:53.398206  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:11:53.398561  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:11:53.398761  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:11:53.398909  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:11:53.400611  248387 fix.go:102] recreateIfNeeded on no-preload-640155: state=Stopped err=<nil>
	I1031 00:11:53.400634  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	W1031 00:11:53.400782  248387 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:11:53.402394  248387 out.go:177] * Restarting existing kvm2 VM for "no-preload-640155" ...
	I1031 00:11:53.403767  248387 main.go:141] libmachine: (no-preload-640155) Calling .Start
	I1031 00:11:53.403944  248387 main.go:141] libmachine: (no-preload-640155) Ensuring networks are active...
	I1031 00:11:53.404678  248387 main.go:141] libmachine: (no-preload-640155) Ensuring network default is active
	I1031 00:11:53.405127  248387 main.go:141] libmachine: (no-preload-640155) Ensuring network mk-no-preload-640155 is active
	I1031 00:11:53.405642  248387 main.go:141] libmachine: (no-preload-640155) Getting domain xml...
	I1031 00:11:53.406300  248387 main.go:141] libmachine: (no-preload-640155) Creating domain...
	I1031 00:11:54.646418  248387 main.go:141] libmachine: (no-preload-640155) Waiting to get IP...
	I1031 00:11:54.647560  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:54.647956  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:54.648034  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:54.647947  249366 retry.go:31] will retry after 237.521879ms: waiting for machine to come up
	I1031 00:11:54.887446  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:54.887861  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:54.887895  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:54.887804  249366 retry.go:31] will retry after 320.996838ms: waiting for machine to come up
	I1031 00:11:53.379251  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:11:53.379302  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:11:53.381458  248084 machine.go:91] provisioned docker machine in 4m37.397131013s
	I1031 00:11:53.381513  248084 fix.go:56] fixHost completed within 4m37.420319931s
	I1031 00:11:53.381528  248084 start.go:83] releasing machines lock for "old-k8s-version-225140", held for 4m37.420354195s
	W1031 00:11:53.381569  248084 start.go:691] error starting host: provision: host is not running
	W1031 00:11:53.381676  248084 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1031 00:11:53.381687  248084 start.go:706] Will try again in 5 seconds ...
	I1031 00:11:55.210309  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:55.210784  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:55.210818  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:55.210728  249366 retry.go:31] will retry after 412.198071ms: waiting for machine to come up
	I1031 00:11:55.624299  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:55.624689  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:55.624721  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:55.624647  249366 retry.go:31] will retry after 596.339141ms: waiting for machine to come up
	I1031 00:11:56.222381  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:56.222918  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:56.222952  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:56.222864  249366 retry.go:31] will retry after 640.775314ms: waiting for machine to come up
	I1031 00:11:56.865881  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:56.866355  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:56.866394  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:56.866321  249366 retry.go:31] will retry after 797.697217ms: waiting for machine to come up
	I1031 00:11:57.665413  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:57.665930  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:57.665971  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:57.665871  249366 retry.go:31] will retry after 808.934364ms: waiting for machine to come up
	I1031 00:11:58.476161  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:58.476620  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:58.476651  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:58.476582  249366 retry.go:31] will retry after 1.198576442s: waiting for machine to come up
	I1031 00:11:59.676957  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:11:59.677540  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:11:59.677575  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:11:59.677462  249366 retry.go:31] will retry after 1.122967081s: waiting for machine to come up
	I1031 00:11:58.383586  248084 start.go:365] acquiring machines lock for old-k8s-version-225140: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 00:12:00.801790  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:00.802278  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:00.802313  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:00.802216  249366 retry.go:31] will retry after 2.182263229s: waiting for machine to come up
	I1031 00:12:02.987870  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:02.988307  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:02.988339  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:02.988235  249366 retry.go:31] will retry after 2.73312352s: waiting for machine to come up
	I1031 00:12:05.723196  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:05.723664  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:05.723695  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:05.723595  249366 retry.go:31] will retry after 2.33306923s: waiting for machine to come up
	I1031 00:12:08.060086  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:08.060364  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:08.060394  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:08.060328  249366 retry.go:31] will retry after 2.770780436s: waiting for machine to come up
	I1031 00:12:10.834601  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:10.834995  248387 main.go:141] libmachine: (no-preload-640155) DBG | unable to find current IP address of domain no-preload-640155 in network mk-no-preload-640155
	I1031 00:12:10.835020  248387 main.go:141] libmachine: (no-preload-640155) DBG | I1031 00:12:10.834939  249366 retry.go:31] will retry after 4.389090657s: waiting for machine to come up
	I1031 00:12:16.389786  248718 start.go:369] acquired machines lock for "embed-certs-078843" in 3m38.778041195s
	I1031 00:12:16.389855  248718 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:12:16.389864  248718 fix.go:54] fixHost starting: 
	I1031 00:12:16.390317  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:12:16.390362  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:12:16.407875  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36031
	I1031 00:12:16.408273  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:12:16.408842  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:12:16.408870  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:12:16.409226  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:12:16.409404  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:16.409574  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:12:16.410975  248718 fix.go:102] recreateIfNeeded on embed-certs-078843: state=Stopped err=<nil>
	I1031 00:12:16.411013  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	W1031 00:12:16.411196  248718 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:12:16.413529  248718 out.go:177] * Restarting existing kvm2 VM for "embed-certs-078843" ...
	I1031 00:12:16.414858  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Start
	I1031 00:12:16.415041  248718 main.go:141] libmachine: (embed-certs-078843) Ensuring networks are active...
	I1031 00:12:16.415738  248718 main.go:141] libmachine: (embed-certs-078843) Ensuring network default is active
	I1031 00:12:16.416116  248718 main.go:141] libmachine: (embed-certs-078843) Ensuring network mk-embed-certs-078843 is active
	I1031 00:12:16.416450  248718 main.go:141] libmachine: (embed-certs-078843) Getting domain xml...
	I1031 00:12:16.417190  248718 main.go:141] libmachine: (embed-certs-078843) Creating domain...
	I1031 00:12:15.226912  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.227453  248387 main.go:141] libmachine: (no-preload-640155) Found IP for machine: 192.168.61.168
	I1031 00:12:15.227473  248387 main.go:141] libmachine: (no-preload-640155) Reserving static IP address...
	I1031 00:12:15.227513  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has current primary IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.227861  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "no-preload-640155", mac: "52:54:00:bd:a4:c2", ip: "192.168.61.168"} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.227890  248387 main.go:141] libmachine: (no-preload-640155) DBG | skip adding static IP to network mk-no-preload-640155 - found existing host DHCP lease matching {name: "no-preload-640155", mac: "52:54:00:bd:a4:c2", ip: "192.168.61.168"}
	I1031 00:12:15.227900  248387 main.go:141] libmachine: (no-preload-640155) Reserved static IP address: 192.168.61.168
	I1031 00:12:15.227919  248387 main.go:141] libmachine: (no-preload-640155) Waiting for SSH to be available...
	I1031 00:12:15.227938  248387 main.go:141] libmachine: (no-preload-640155) DBG | Getting to WaitForSSH function...
	I1031 00:12:15.230076  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.230450  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.230556  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.230578  248387 main.go:141] libmachine: (no-preload-640155) DBG | Using SSH client type: external
	I1031 00:12:15.230601  248387 main.go:141] libmachine: (no-preload-640155) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa (-rw-------)
	I1031 00:12:15.230646  248387 main.go:141] libmachine: (no-preload-640155) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.168 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:12:15.230666  248387 main.go:141] libmachine: (no-preload-640155) DBG | About to run SSH command:
	I1031 00:12:15.230678  248387 main.go:141] libmachine: (no-preload-640155) DBG | exit 0
	I1031 00:12:15.316515  248387 main.go:141] libmachine: (no-preload-640155) DBG | SSH cmd err, output: <nil>: 
	I1031 00:12:15.316855  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetConfigRaw
	I1031 00:12:15.317658  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetIP
	I1031 00:12:15.320306  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.320647  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.320679  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.321008  248387 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/config.json ...
	I1031 00:12:15.321252  248387 machine.go:88] provisioning docker machine ...
	I1031 00:12:15.321275  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:15.321492  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetMachineName
	I1031 00:12:15.321669  248387 buildroot.go:166] provisioning hostname "no-preload-640155"
	I1031 00:12:15.321691  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetMachineName
	I1031 00:12:15.321858  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.324151  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.324480  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.324518  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.324657  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:15.324849  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.325057  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.325237  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:15.325416  248387 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:15.325795  248387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.168 22 <nil> <nil>}
	I1031 00:12:15.325815  248387 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-640155 && echo "no-preload-640155" | sudo tee /etc/hostname
	I1031 00:12:15.450048  248387 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-640155
	
	I1031 00:12:15.450079  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.452951  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.453298  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.453344  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.453430  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:15.453657  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.453800  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.453899  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:15.454055  248387 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:15.454540  248387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.168 22 <nil> <nil>}
	I1031 00:12:15.454569  248387 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-640155' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-640155/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-640155' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:12:15.574041  248387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:12:15.574072  248387 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:12:15.574104  248387 buildroot.go:174] setting up certificates
	I1031 00:12:15.574116  248387 provision.go:83] configureAuth start
	I1031 00:12:15.574125  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetMachineName
	I1031 00:12:15.574451  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetIP
	I1031 00:12:15.577558  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.578020  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.578059  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.578197  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.580453  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.580832  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.580876  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.581078  248387 provision.go:138] copyHostCerts
	I1031 00:12:15.581171  248387 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:12:15.581184  248387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:12:15.581256  248387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:12:15.581407  248387 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:12:15.581420  248387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:12:15.581453  248387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:12:15.581522  248387 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:12:15.581530  248387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:12:15.581560  248387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:12:15.581611  248387 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.no-preload-640155 san=[192.168.61.168 192.168.61.168 localhost 127.0.0.1 minikube no-preload-640155]
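	provision.go reports generating a server certificate whose SAN list contains the machine IP (listed twice), localhost, 127.0.0.1, minikube, and the profile name. A simplified Go sketch of how such a SAN list maps onto x509.Certificate fields; it self-signs for brevity, whereas the real flow signs with the profile's ca.pem/ca-key.pem. Every literal below is copied from the log line above.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate (the real flow pairs this with the CA key).
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// SAN list taken from the provision.go line above: IPs plus host names.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-640155"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.61.168"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-640155"},
	}

	// Self-signed here for brevity; minikube signs with ca.pem/ca-key.pem instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```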
	I1031 00:12:15.693832  248387 provision.go:172] copyRemoteCerts
	I1031 00:12:15.693906  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:12:15.693934  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.696811  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.697210  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.697258  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.697471  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:15.697683  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.697870  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:15.698054  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:12:15.781207  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:12:15.803665  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1031 00:12:15.826369  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 00:12:15.849259  248387 provision.go:86] duration metric: configureAuth took 275.127597ms
	I1031 00:12:15.849292  248387 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:12:15.849476  248387 config.go:182] Loaded profile config "no-preload-640155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:12:15.849565  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:15.852413  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.852804  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:15.852848  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:15.853027  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:15.853227  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.853440  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:15.853549  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:15.853724  248387 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:15.854104  248387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.168 22 <nil> <nil>}
	I1031 00:12:15.854132  248387 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:12:16.147033  248387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:12:16.147078  248387 machine.go:91] provisioned docker machine in 825.808812ms
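	The "setting minikube options for container-runtime" step above boils down to writing one sysconfig file on the guest and restarting CRI-O. A hedged sketch of the same step; the command string mirrors the log, but it is printed as a dry run rather than executed so the sketch has no side effects:

```go
package main

import "fmt"

func main() {
	// Same step the log shows above: persist the extra CRI-O flag in a sysconfig
	// file and restart the service so it picks the flag up. Printed instead of
	// executed here; minikube pipes the command through its ssh_runner.
	opts := "--insecure-registry 10.96.0.0/12 "
	cmd := fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %%s \"\nCRIO_MINIKUBE_OPTIONS='%s'\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio", opts)
	fmt.Println(cmd)
}
```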
	I1031 00:12:16.147094  248387 start.go:300] post-start starting for "no-preload-640155" (driver="kvm2")
	I1031 00:12:16.147110  248387 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:12:16.147138  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.147515  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:12:16.147545  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:16.150321  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.150755  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.150798  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.150909  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:16.151155  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.151335  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:16.151493  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:12:16.237897  248387 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:12:16.242343  248387 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:12:16.242367  248387 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:12:16.242440  248387 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:12:16.242526  248387 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:12:16.242636  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:12:16.250454  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:12:16.273390  248387 start.go:303] post-start completed in 126.280341ms
	I1031 00:12:16.273411  248387 fix.go:56] fixHost completed within 22.891678533s
	I1031 00:12:16.273433  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:16.276291  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.276598  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.276630  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.276761  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:16.276989  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.277270  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.277434  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:16.277621  248387 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:16.277984  248387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.61.168 22 <nil> <nil>}
	I1031 00:12:16.277998  248387 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 00:12:16.389581  248387 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698711136.336935137
	
	I1031 00:12:16.389607  248387 fix.go:206] guest clock: 1698711136.336935137
	I1031 00:12:16.389621  248387 fix.go:219] Guest: 2023-10-31 00:12:16.336935137 +0000 UTC Remote: 2023-10-31 00:12:16.273414732 +0000 UTC m=+271.294357841 (delta=63.520405ms)
	I1031 00:12:16.389652  248387 fix.go:190] guest clock delta is within tolerance: 63.520405ms
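	fix.go compares the guest's `date +%s.%N` output against the host-side timestamp and only resyncs the clock when the delta exceeds a tolerance; here the drift is 63.5ms and is accepted. A small Go sketch of that comparison, using the two timestamps from the log; the tolerance value is illustrative rather than minikube's configured threshold:

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Guest clock as reported by `date +%s.%N` in the log above.
	guestRaw := "1698711136.336935137"
	sec, _ := strconv.ParseFloat(guestRaw, 64)
	guest := time.Unix(0, int64(sec*float64(time.Second)))

	// Host-side timestamp from the same fix.go line.
	remote, _ := time.Parse(time.RFC3339Nano, "2023-10-31T00:12:16.273414732Z")

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}

	// Tolerance is illustrative; the guest clock is only adjusted when the
	// drift is larger than the configured threshold.
	const tolerance = 2 * time.Second
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
}
```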
	I1031 00:12:16.389659  248387 start.go:83] releasing machines lock for "no-preload-640155", held for 23.007957251s
	I1031 00:12:16.389694  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.390027  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetIP
	I1031 00:12:16.392988  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.393466  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.393493  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.393639  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.394137  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.394306  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:12:16.394401  248387 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:12:16.394449  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:16.394583  248387 ssh_runner.go:195] Run: cat /version.json
	I1031 00:12:16.394619  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:12:16.397387  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.397690  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.397757  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.397785  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.397927  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:16.398140  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.398174  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:16.398206  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:16.398296  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:16.398430  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:12:16.398503  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:12:16.398616  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:12:16.398784  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:12:16.398936  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:12:16.520353  248387 ssh_runner.go:195] Run: systemctl --version
	I1031 00:12:16.526647  248387 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:12:16.673048  248387 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:12:16.679657  248387 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:12:16.679738  248387 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:12:16.699616  248387 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 00:12:16.699643  248387 start.go:472] detecting cgroup driver to use...
	I1031 00:12:16.699706  248387 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:12:16.717466  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:12:16.729231  248387 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:12:16.729300  248387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:12:16.741665  248387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:12:16.754175  248387 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:12:16.855649  248387 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:12:16.990153  248387 docker.go:214] disabling docker service ...
	I1031 00:12:16.990239  248387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:12:17.004614  248387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:12:17.017251  248387 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:12:17.143006  248387 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:12:17.257321  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
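	Because this profile runs CRI-O, the provisioner first stops and masks the competing runtimes, cri-docker and then docker, before touching CRI-O itself; the lines above show exactly that systemctl sequence. A compact dry-run sketch of the same sequence (printed rather than executed; minikube runs each command over its SSH runner and tolerates failures):

```go
package main

import "fmt"

func main() {
	// Same order as the log above: stop sockets/services first, then disable and
	// mask them so they cannot return on reboot, leaving CRI-O as the only runtime.
	for _, cmd := range []string{
		"sudo systemctl stop -f cri-docker.socket",
		"sudo systemctl stop -f cri-docker.service",
		"sudo systemctl disable cri-docker.socket",
		"sudo systemctl mask cri-docker.service",
		"sudo systemctl stop -f docker.socket",
		"sudo systemctl stop -f docker.service",
		"sudo systemctl disable docker.socket",
		"sudo systemctl mask docker.service",
	} {
		fmt.Println(cmd)
	}
}
```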
	I1031 00:12:17.271045  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:12:17.288903  248387 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1031 00:12:17.289001  248387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:17.298419  248387 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:12:17.298516  248387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:17.308045  248387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:17.317176  248387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
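	The three sed invocations above pin the pause image to registry.k8s.io/pause:3.9 and force the cgroupfs driver (with conmon placed in the "pod" cgroup) inside /etc/crio/crio.conf.d/02-crio.conf. A sketch of the same rewrite applied to the config text in Go instead of sed, so the intent of each substitution is visible; the starting fragment is a plausible example, not the file's actual contents:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A plausible drop-in fragment; on the guest the real file is
	// /etc/crio/crio.conf.d/02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// Mirror the sed commands from the log, in the same order: pin the pause
	// image, switch the cgroup driver to cgroupfs, drop any existing
	// conmon_cgroup line, then append conmon_cgroup = "pod" after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}
```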
	I1031 00:12:17.327039  248387 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 00:12:17.337269  248387 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:12:17.345814  248387 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 00:12:17.345886  248387 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 00:12:17.359110  248387 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
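	The sysctl probe fails only because br_netfilter is not loaded yet, which is why the log flags it as "might be okay" and immediately loads the module and enables IPv4 forwarding. A sketch of the same check-then-fallback; the file paths come from the log, everything else is hedged:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The sysctl key only exists once br_netfilter is loaded, so a failed stat
	// here is expected on a fresh guest (the "might be okay" case in the log).
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		fmt.Println("bridge-nf-call-iptables missing, loading br_netfilter:", err)
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v\n%s", err, out)
			return
		}
	}

	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward` from the log.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("enabling ip_forward needs root:", err)
	}
}
```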
	I1031 00:12:17.369376  248387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:12:17.480359  248387 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1031 00:12:17.658006  248387 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:12:17.658099  248387 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:12:17.663296  248387 start.go:540] Will wait 60s for crictl version
	I1031 00:12:17.663467  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:17.667483  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:12:17.709866  248387 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:12:17.709956  248387 ssh_runner.go:195] Run: crio --version
	I1031 00:12:17.757817  248387 ssh_runner.go:195] Run: crio --version
	I1031 00:12:17.812918  248387 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1031 00:12:17.814541  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetIP
	I1031 00:12:17.818008  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:17.818445  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:12:17.818482  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:12:17.818745  248387 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1031 00:12:17.822914  248387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:12:17.837885  248387 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:12:17.837941  248387 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:12:17.874977  248387 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1031 00:12:17.875010  248387 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1031 00:12:17.875097  248387 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:17.875104  248387 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:17.875130  248387 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:17.875163  248387 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1031 00:12:17.875181  248387 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:17.875233  248387 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:17.875297  248387 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:17.875306  248387 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:17.876689  248387 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:17.876731  248387 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:17.876696  248387 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:17.876842  248387 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:17.876697  248387 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:17.876695  248387 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:17.876704  248387 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:17.876842  248387 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1031 00:12:18.053090  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:18.059240  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1031 00:12:18.059239  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:18.065016  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:18.069953  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:18.071229  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:18.140026  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:18.149728  248387 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1031 00:12:18.149778  248387 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:18.149835  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.172611  248387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:18.238794  248387 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1031 00:12:18.238851  248387 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:18.238913  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331173  248387 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1031 00:12:18.331228  248387 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:18.331279  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331278  248387 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1031 00:12:18.331370  248387 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1031 00:12:18.331380  248387 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:18.331401  248387 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:18.331425  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331441  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331463  248387 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1031 00:12:18.331503  248387 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:18.331542  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 00:12:18.331584  248387 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1031 00:12:18.331632  248387 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:18.331665  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331545  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:12:18.331591  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1031 00:12:18.348470  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1031 00:12:18.348506  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1031 00:12:18.348570  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1031 00:12:18.348619  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:12:18.484280  248387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1031 00:12:18.484369  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1031 00:12:18.484436  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1031 00:12:18.484501  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1031 00:12:18.484532  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I1031 00:12:18.513117  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1031 00:12:18.513211  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1031 00:12:18.513238  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I1031 00:12:18.513264  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1031 00:12:18.513307  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1031 00:12:18.513347  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1031 00:12:18.513392  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1031 00:12:18.513515  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1031 00:12:18.541278  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1031 00:12:18.541307  248387 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1031 00:12:18.541340  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1031 00:12:18.541348  248387 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1031 00:12:18.541370  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1031 00:12:18.541416  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1031 00:12:18.541466  248387 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1031 00:12:18.541493  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1031 00:12:18.541547  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1031 00:12:18.541549  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
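	The "copy: skipping ... (exists)" lines above are the outcome of comparing each cached image tarball on the host with what already sits under /var/lib/minikube/images on the guest (the earlier `stat -c "%s %y"` runs); only missing or changed tarballs are scp'd before being loaded. A sketch of that decision for one image, with the comparison reduced to file size to keep the example local and self-contained (the real check also compares modification times reported by the remote stat):

```go
package main

import (
	"fmt"
	"os"
)

// needsCopy reports whether a cached tarball must be transferred: the copy is
// skipped only when the destination exists and matches. Size-only comparison
// here is a simplification of the remote `stat -c "%s %y"` check in the log.
func needsCopy(localPath string, remoteSize int64, remoteExists bool) (bool, error) {
	if !remoteExists {
		return true, nil
	}
	fi, err := os.Stat(localPath)
	if err != nil {
		return false, err
	}
	return fi.Size() != remoteSize, nil
}

func main() {
	// Path taken from the log; the remote values here are illustrative.
	local := "/home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0"
	copyIt, err := needsCopy(local, 0, false)
	fmt.Println("copy needed:", copyIt, "err:", err)
}
```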
	I1031 00:12:17.727796  248718 main.go:141] libmachine: (embed-certs-078843) Waiting to get IP...
	I1031 00:12:17.728716  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:17.729132  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:17.729165  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:17.729087  249483 retry.go:31] will retry after 294.663443ms: waiting for machine to come up
	I1031 00:12:18.025671  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:18.026112  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:18.026145  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:18.026058  249483 retry.go:31] will retry after 377.887631ms: waiting for machine to come up
	I1031 00:12:18.405434  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:18.405878  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:18.405961  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:18.405857  249483 retry.go:31] will retry after 459.989463ms: waiting for machine to come up
	I1031 00:12:18.867094  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:18.867658  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:18.867693  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:18.867590  249483 retry.go:31] will retry after 552.876869ms: waiting for machine to come up
	I1031 00:12:19.422232  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:19.422678  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:19.422711  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:19.422642  249483 retry.go:31] will retry after 574.514705ms: waiting for machine to come up
	I1031 00:12:19.998587  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:19.999158  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:19.999195  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:19.999071  249483 retry.go:31] will retry after 903.246228ms: waiting for machine to come up
	I1031 00:12:20.904654  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:20.905083  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:20.905118  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:20.905028  249483 retry.go:31] will retry after 1.161301577s: waiting for machine to come up
	I1031 00:12:22.067416  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:22.067874  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:22.067906  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:22.067843  249483 retry.go:31] will retry after 1.350619049s: waiting for machine to come up
	I1031 00:12:23.419771  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:23.420313  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:23.420343  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:23.420276  249483 retry.go:31] will retry after 1.783701579s: waiting for machine to come up
	I1031 00:12:25.206301  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:25.206880  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:25.206909  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:25.206820  249483 retry.go:31] will retry after 2.304762715s: waiting for machine to come up
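	While the image work proceeds on the no-preload node, the embed-certs VM (process 248718) is still waiting for its DHCP lease, retrying with growing delays (294ms, 377ms, 459ms, ... up to a few seconds). A minimal sketch of that wait loop; the jittered, growing delay schedule is illustrative rather than minikube's exact retry.go policy:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping a little longer
// (with jitter) after each failure, like the "will retry after ..." lines above.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("retry %d: will retry after %v: waiting for machine to come up\n", i+1, sleep)
		time.Sleep(sleep)
		delay += delay / 2 // grow the base delay between attempts
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.50.2", nil // the address the log eventually finds
	}, 10)
	fmt.Println(ip, err)
}
```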
	I1031 00:12:25.834889  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.293473845s)
	I1031 00:12:25.834930  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1031 00:12:25.834949  248387 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.3: (7.293455157s)
	I1031 00:12:25.834967  248387 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1031 00:12:25.834986  248387 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1031 00:12:25.835039  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1031 00:12:28.718454  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (2.883305744s)
	I1031 00:12:28.718498  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1031 00:12:28.718536  248387 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1031 00:12:28.718602  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1031 00:12:27.513250  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:27.513691  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:27.513726  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:27.513617  249483 retry.go:31] will retry after 2.77005827s: waiting for machine to come up
	I1031 00:12:30.287716  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:30.288125  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:30.288181  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:30.288095  249483 retry.go:31] will retry after 2.359494113s: waiting for machine to come up
	I1031 00:12:30.082206  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.363538098s)
	I1031 00:12:30.082241  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1031 00:12:30.082284  248387 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1031 00:12:30.082378  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1031 00:12:32.754830  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (2.672412397s)
	I1031 00:12:32.754865  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1031 00:12:32.754922  248387 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1031 00:12:32.755008  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1031 00:12:34.104402  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (1.3493522s)
	I1031 00:12:34.104443  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1031 00:12:34.104484  248387 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1031 00:12:34.104528  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
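	Each cached tarball is then streamed into CRI-O's image store one at a time with `sudo podman load -i ...`; the log shows etcd taking about 7.3s, the controller-manager about 2.9s, and so on. A dry-run sketch of that sequential load loop, with the file names taken from the "Loading image:" lines:

```go
package main

import "fmt"

func main() {
	// Tarballs already staged under /var/lib/minikube/images by the copy step;
	// loading them one by one is what the crio.go "Loading image:" lines show.
	images := []string{
		"etcd_3.5.9-0",
		"kube-controller-manager_v1.28.3",
		"coredns_v1.10.1",
		"kube-apiserver_v1.28.3",
		"kube-scheduler_v1.28.3",
		"storage-provisioner_v5",
		"kube-proxy_v1.28.3",
	}
	for _, img := range images {
		// Printed as a dry run; minikube runs each command over SSH and waits
		// for one load to finish before starting the next.
		fmt.Printf("sudo podman load -i /var/lib/minikube/images/%s\n", img)
	}
}
```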
	I1031 00:12:36.966451  249055 start.go:369] acquired machines lock for "default-k8s-diff-port-892233" in 2m37.695455763s
	I1031 00:12:36.966568  249055 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:12:36.966579  249055 fix.go:54] fixHost starting: 
	I1031 00:12:36.966927  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:12:36.966965  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:12:36.985392  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46007
	I1031 00:12:36.985889  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:12:36.986473  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:12:36.986501  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:12:36.986870  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:12:36.987100  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:12:36.987295  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:12:36.989416  249055 fix.go:102] recreateIfNeeded on default-k8s-diff-port-892233: state=Stopped err=<nil>
	I1031 00:12:36.989470  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	W1031 00:12:36.989641  249055 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:12:36.991746  249055 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-892233" ...
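	The default-k8s-diff-port profile has just acquired the machines lock and found its VM Stopped, so fix.go chooses to restart the existing domain rather than recreate it. A tiny sketch of that branch; the Machine interface and stoppedVM type are hypothetical stand-ins for the libmachine host handle, not real minikube APIs:

```go
package main

import "fmt"

// Machine is a hypothetical stand-in for the libmachine host handle queried
// before deciding whether to reuse, restart, or recreate a VM.
type Machine interface {
	State() (string, error)
	Start() error
}

func recreateIfNeeded(m Machine, name string) error {
	state, err := m.State()
	if err != nil {
		return err
	}
	switch state {
	case "Running":
		fmt.Printf("%s already running, reusing it\n", name)
		return nil
	case "Stopped":
		fmt.Printf("* Restarting existing kvm2 VM for %q ...\n", name)
		return m.Start()
	default:
		return fmt.Errorf("unexpected machine state %q", state)
	}
}

// stoppedVM is a fake used only so the sketch runs end to end.
type stoppedVM struct{ started bool }

func (s *stoppedVM) State() (string, error) {
	if s.started {
		return "Running", nil
	}
	return "Stopped", nil
}

func (s *stoppedVM) Start() error {
	s.started = true
	return nil
}

func main() {
	_ = recreateIfNeeded(&stoppedVM{}, "default-k8s-diff-port-892233")
}
```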
	I1031 00:12:32.648970  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:32.649516  248718 main.go:141] libmachine: (embed-certs-078843) DBG | unable to find current IP address of domain embed-certs-078843 in network mk-embed-certs-078843
	I1031 00:12:32.649563  248718 main.go:141] libmachine: (embed-certs-078843) DBG | I1031 00:12:32.649477  249483 retry.go:31] will retry after 2.827972253s: waiting for machine to come up
	I1031 00:12:35.479127  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.479655  248718 main.go:141] libmachine: (embed-certs-078843) Found IP for machine: 192.168.50.2
	I1031 00:12:35.479691  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has current primary IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.479703  248718 main.go:141] libmachine: (embed-certs-078843) Reserving static IP address...
	I1031 00:12:35.480200  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "embed-certs-078843", mac: "52:54:00:f5:a8:73", ip: "192.168.50.2"} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.480259  248718 main.go:141] libmachine: (embed-certs-078843) DBG | skip adding static IP to network mk-embed-certs-078843 - found existing host DHCP lease matching {name: "embed-certs-078843", mac: "52:54:00:f5:a8:73", ip: "192.168.50.2"}
	I1031 00:12:35.480299  248718 main.go:141] libmachine: (embed-certs-078843) Reserved static IP address: 192.168.50.2
	I1031 00:12:35.480319  248718 main.go:141] libmachine: (embed-certs-078843) Waiting for SSH to be available...
	I1031 00:12:35.480334  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Getting to WaitForSSH function...
	I1031 00:12:35.482640  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.483140  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.483177  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.483343  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Using SSH client type: external
	I1031 00:12:35.483373  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa (-rw-------)
	I1031 00:12:35.483409  248718 main.go:141] libmachine: (embed-certs-078843) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:12:35.483434  248718 main.go:141] libmachine: (embed-certs-078843) DBG | About to run SSH command:
	I1031 00:12:35.483453  248718 main.go:141] libmachine: (embed-certs-078843) DBG | exit 0
	I1031 00:12:35.573283  248718 main.go:141] libmachine: (embed-certs-078843) DBG | SSH cmd err, output: <nil>: 
	I1031 00:12:35.573731  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetConfigRaw
	I1031 00:12:35.574538  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetIP
	I1031 00:12:35.577369  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.577820  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.577856  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.578175  248718 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/config.json ...
	I1031 00:12:35.578461  248718 machine.go:88] provisioning docker machine ...
	I1031 00:12:35.578486  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:35.578719  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetMachineName
	I1031 00:12:35.578919  248718 buildroot.go:166] provisioning hostname "embed-certs-078843"
	I1031 00:12:35.578946  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetMachineName
	I1031 00:12:35.579137  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:35.581632  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.582041  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.582075  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.582185  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:35.582376  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:35.582556  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:35.582694  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:35.582864  248718 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:35.583247  248718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I1031 00:12:35.583268  248718 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-078843 && echo "embed-certs-078843" | sudo tee /etc/hostname
	I1031 00:12:35.717684  248718 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-078843
	
	I1031 00:12:35.717719  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:35.720882  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.721264  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.721299  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.721514  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:35.721732  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:35.721908  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:35.722057  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:35.722318  248718 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:35.722757  248718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I1031 00:12:35.722777  248718 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-078843' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-078843/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-078843' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:12:35.865568  248718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:12:35.865626  248718 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:12:35.865667  248718 buildroot.go:174] setting up certificates
	I1031 00:12:35.865682  248718 provision.go:83] configureAuth start
	I1031 00:12:35.865696  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetMachineName
	I1031 00:12:35.866070  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetIP
	I1031 00:12:35.869149  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.869571  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.869610  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.869731  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:35.872260  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.872618  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:35.872665  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:35.872855  248718 provision.go:138] copyHostCerts
	I1031 00:12:35.872978  248718 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:12:35.873000  248718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:12:35.873069  248718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:12:35.873192  248718 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:12:35.873203  248718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:12:35.873234  248718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:12:35.873316  248718 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:12:35.873327  248718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:12:35.873352  248718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:12:35.873426  248718 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.embed-certs-078843 san=[192.168.50.2 192.168.50.2 localhost 127.0.0.1 minikube embed-certs-078843]
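The configureAuth step above produces a CA-signed server certificate whose SANs cover the machine IP, localhost and the node's host names. As a rough illustration of that technique only (this is not minikube's actual code; the file names, organization string and validity period are placeholders), a Go sketch using crypto/x509 could look like this:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// loadPEM reads a PEM file and returns the first block's DER bytes.
func loadPEM(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	return block.Bytes
}

func main() {
	// Load the signing CA (placeholder file names; assumes an RSA PKCS#1 key).
	caCert, err := x509.ParseCertificate(loadPEM("ca.pem"))
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(loadPEM("ca-key.pem"))
	if err != nil {
		log.Fatal(err)
	}

	// New server key plus a template carrying the SANs seen in the log line above.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-078843"}},
		DNSNames:     []string{"localhost", "minikube", "embed-certs-078843"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.50.2"), net.ParseIP("127.0.0.1")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // placeholder validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}

	// Sign with the CA and write the server cert and key to disk.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
}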
	I1031 00:12:36.016430  248718 provision.go:172] copyRemoteCerts
	I1031 00:12:36.016506  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:12:36.016553  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.019662  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.020054  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.020088  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.020286  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.020505  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.020658  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.020843  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:12:36.111793  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:12:36.140569  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1031 00:12:36.179708  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 00:12:36.203348  248718 provision.go:86] duration metric: configureAuth took 337.646698ms
	I1031 00:12:36.203385  248718 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:12:36.203690  248718 config.go:182] Loaded profile config "embed-certs-078843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:12:36.203835  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.207444  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.207883  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.207923  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.208236  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.208498  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.208690  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.208912  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.209163  248718 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:36.209521  248718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I1031 00:12:36.209547  248718 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:12:36.711502  248718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:12:36.711535  248718 machine.go:91] provisioned docker machine in 1.133056882s
	I1031 00:12:36.711550  248718 start.go:300] post-start starting for "embed-certs-078843" (driver="kvm2")
	I1031 00:12:36.711563  248718 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:12:36.711587  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.711984  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:12:36.712027  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.714954  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.715374  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.715408  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.715610  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.715815  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.716019  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.716192  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:12:36.803613  248718 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:12:36.808855  248718 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:12:36.808888  248718 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:12:36.808973  248718 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:12:36.809100  248718 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:12:36.809240  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:12:36.818339  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:12:36.845738  248718 start.go:303] post-start completed in 134.172265ms
	I1031 00:12:36.845765  248718 fix.go:56] fixHost completed within 20.4559017s
	I1031 00:12:36.845788  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.848249  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.848592  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.848621  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.848861  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.849120  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.849307  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.849462  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.849659  248718 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:36.850033  248718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.2 22 <nil> <nil>}
	I1031 00:12:36.850047  248718 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1031 00:12:36.966267  248718 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698711156.912809532
	
	I1031 00:12:36.966293  248718 fix.go:206] guest clock: 1698711156.912809532
	I1031 00:12:36.966303  248718 fix.go:219] Guest: 2023-10-31 00:12:36.912809532 +0000 UTC Remote: 2023-10-31 00:12:36.845768911 +0000 UTC m=+239.388163644 (delta=67.040621ms)
	I1031 00:12:36.966329  248718 fix.go:190] guest clock delta is within tolerance: 67.040621ms
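The fix.go lines above read the guest clock over SSH (via date +%s.%N), compare it against the host clock, and accept the machine when the delta is small. A minimal sketch of that comparison, assuming a captured output string and an arbitrary 2-second tolerance (the real threshold is not shown in this log), follows:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output captured from the remote `date +%s.%N` command above.
	guestOut := "1698711156.912809532"
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	// Compare against the local (host) clock.
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // assumed threshold for this sketch
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
	}
}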
	I1031 00:12:36.966341  248718 start.go:83] releasing machines lock for "embed-certs-078843", held for 20.576516085s
	I1031 00:12:36.966380  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.967388  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetIP
	I1031 00:12:36.970301  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.970734  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.970766  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.970934  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.971468  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.971683  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:12:36.971781  248718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:12:36.971832  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.971921  248718 ssh_runner.go:195] Run: cat /version.json
	I1031 00:12:36.971951  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:12:36.974873  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.975244  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.975323  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.975420  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.975692  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:36.975718  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:36.975759  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.975901  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:12:36.975959  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.976068  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:12:36.976221  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.976279  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:12:36.976358  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:12:36.977011  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:12:37.095751  248718 ssh_runner.go:195] Run: systemctl --version
	I1031 00:12:37.101600  248718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:12:37.244676  248718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:12:37.253623  248718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:12:37.253702  248718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:12:37.272872  248718 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 00:12:37.272897  248718 start.go:472] detecting cgroup driver to use...
	I1031 00:12:37.272992  248718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:12:37.290899  248718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:12:37.306570  248718 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:12:37.306633  248718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:12:37.321827  248718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:12:37.336787  248718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:12:37.451589  248718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:12:37.571290  248718 docker.go:214] disabling docker service ...
	I1031 00:12:37.571375  248718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:12:37.587764  248718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:12:37.600627  248718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:12:37.733539  248718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:12:37.850154  248718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 00:12:37.865463  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:12:37.883661  248718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1031 00:12:37.883728  248718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:37.892717  248718 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:12:37.892783  248718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:37.901944  248718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:37.911061  248718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:12:37.920094  248718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 00:12:37.929520  248718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:12:37.937333  248718 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 00:12:37.937404  248718 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 00:12:37.949591  248718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 00:12:37.960061  248718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:12:38.076354  248718 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1031 00:12:38.250618  248718 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:12:38.250688  248718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:12:38.255979  248718 start.go:540] Will wait 60s for crictl version
	I1031 00:12:38.256036  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:12:38.259822  248718 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:12:38.299812  248718 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:12:38.299981  248718 ssh_runner.go:195] Run: crio --version
	I1031 00:12:38.343088  248718 ssh_runner.go:195] Run: crio --version
	I1031 00:12:38.397252  248718 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1031 00:12:36.993369  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Start
	I1031 00:12:36.993641  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Ensuring networks are active...
	I1031 00:12:36.994545  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Ensuring network default is active
	I1031 00:12:36.994911  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Ensuring network mk-default-k8s-diff-port-892233 is active
	I1031 00:12:36.995448  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Getting domain xml...
	I1031 00:12:36.996378  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Creating domain...
	I1031 00:12:38.342502  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting to get IP...
	I1031 00:12:38.343505  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.344038  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.344115  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:38.344004  249635 retry.go:31] will retry after 206.530958ms: waiting for machine to come up
	I1031 00:12:38.552789  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.553109  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.553140  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:38.553059  249635 retry.go:31] will retry after 272.962928ms: waiting for machine to come up
	I1031 00:12:38.827741  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.828288  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:38.828326  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:38.828242  249635 retry.go:31] will retry after 411.85264ms: waiting for machine to come up
	I1031 00:12:35.048294  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1031 00:12:35.048344  248387 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1031 00:12:35.048404  248387 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1031 00:12:36.902739  248387 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (1.854307965s)
	I1031 00:12:36.902771  248387 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1031 00:12:36.902803  248387 cache_images.go:123] Successfully loaded all cached images
	I1031 00:12:36.902810  248387 cache_images.go:92] LoadImages completed in 19.027785915s
	I1031 00:12:36.902926  248387 ssh_runner.go:195] Run: crio config
	I1031 00:12:36.961891  248387 cni.go:84] Creating CNI manager for ""
	I1031 00:12:36.961922  248387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:12:36.961950  248387 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:12:36.961992  248387 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.168 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-640155 NodeName:no-preload-640155 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 00:12:36.962203  248387 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-640155"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 00:12:36.962312  248387 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-640155 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-640155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 00:12:36.962389  248387 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 00:12:36.973945  248387 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:12:36.974026  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:12:36.987534  248387 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1031 00:12:37.006510  248387 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:12:37.025092  248387 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1031 00:12:37.045090  248387 ssh_runner.go:195] Run: grep 192.168.61.168	control-plane.minikube.internal$ /etc/hosts
	I1031 00:12:37.048822  248387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.168	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:12:37.061985  248387 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155 for IP: 192.168.61.168
	I1031 00:12:37.062026  248387 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:12:37.062243  248387 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:12:37.062310  248387 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:12:37.062410  248387 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/client.key
	I1031 00:12:37.062508  248387 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/apiserver.key.96e3443b
	I1031 00:12:37.062570  248387 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/proxy-client.key
	I1031 00:12:37.062707  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:12:37.062750  248387 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:12:37.062767  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:12:37.062832  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:12:37.062877  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:12:37.062923  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:12:37.062987  248387 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:12:37.063745  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:12:37.090011  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:12:37.119401  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:12:37.148361  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 00:12:37.173730  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:12:37.197769  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:12:37.221625  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:12:37.244497  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:12:37.274559  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:12:37.300372  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:12:37.332082  248387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:12:37.361826  248387 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:12:37.380561  248387 ssh_runner.go:195] Run: openssl version
	I1031 00:12:37.386185  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:12:37.396710  248387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:12:37.401896  248387 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:12:37.401983  248387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:12:37.407778  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:12:37.418091  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:12:37.427985  248387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:37.432581  248387 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:37.432649  248387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:37.438103  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 00:12:37.447792  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:12:37.457689  248387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:12:37.462421  248387 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:12:37.462495  248387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:12:37.468482  248387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1031 00:12:37.478565  248387 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:12:37.483264  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:12:37.491175  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:12:37.498212  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:12:37.504019  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:12:37.509730  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:12:37.516218  248387 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
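The openssl "-checkend 86400" invocations above ask whether each certificate is still valid 24 hours from now. The equivalent check in Go, shown here only as a hedged sketch with a placeholder file name, parses the PEM certificate and compares NotAfter against now+24h:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Placeholder path; the log checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would need to be regenerated")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}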
	I1031 00:12:37.523364  248387 kubeadm.go:404] StartCluster: {Name:no-preload-640155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-640155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.168 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:12:37.523465  248387 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:12:37.523522  248387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:12:37.576223  248387 cri.go:89] found id: ""
	I1031 00:12:37.576314  248387 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 00:12:37.586094  248387 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 00:12:37.586133  248387 kubeadm.go:636] restartCluster start
	I1031 00:12:37.586217  248387 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 00:12:37.595614  248387 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:37.596791  248387 kubeconfig.go:92] found "no-preload-640155" server: "https://192.168.61.168:8443"
	I1031 00:12:37.600710  248387 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 00:12:37.610066  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:37.610137  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:37.620501  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:37.620528  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:37.620578  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:37.630477  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:38.131205  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:38.131335  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:38.144627  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:38.631491  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:38.631587  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:38.647034  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:39.131616  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:39.131749  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:39.148723  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:39.631171  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:39.631273  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:39.645807  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:38.398862  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetIP
	I1031 00:12:38.401804  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:38.402158  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:12:38.402193  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:12:38.402475  248718 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1031 00:12:38.407041  248718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:12:38.421147  248718 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:12:38.421228  248718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:12:38.461162  248718 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1031 00:12:38.461240  248718 ssh_runner.go:195] Run: which lz4
	I1031 00:12:38.465401  248718 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1031 00:12:38.469796  248718 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 00:12:38.469833  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1031 00:12:40.419642  248718 crio.go:444] Took 1.954260 seconds to copy over tarball
	I1031 00:12:40.419721  248718 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 00:12:39.241872  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:39.242407  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:39.242465  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:39.242347  249635 retry.go:31] will retry after 371.774477ms: waiting for machine to come up
	I1031 00:12:39.616171  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:39.616708  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:39.616747  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:39.616671  249635 retry.go:31] will retry after 487.120901ms: waiting for machine to come up
	I1031 00:12:40.105492  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:40.106116  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:40.106151  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:40.106066  249635 retry.go:31] will retry after 767.19349ms: waiting for machine to come up
	I1031 00:12:40.875432  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:40.875932  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:40.876009  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:40.875892  249635 retry.go:31] will retry after 976.411998ms: waiting for machine to come up
	I1031 00:12:41.854227  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:41.854759  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:41.854794  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:41.854691  249635 retry.go:31] will retry after 1.041793781s: waiting for machine to come up
	I1031 00:12:42.898223  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:42.898628  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:42.898658  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:42.898577  249635 retry.go:31] will retry after 1.163252223s: waiting for machine to come up
	I1031 00:12:44.064217  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:44.064593  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:44.064626  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:44.064543  249635 retry.go:31] will retry after 1.879015473s: waiting for machine to come up
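The retry.go:31 lines above poll libvirt for the new VM's DHCP lease, waiting a little longer after each failed attempt. A minimal Go sketch of that retry-with-growing-delay pattern (the lookupIP stub, attempt limit, jitter and growth factor are assumptions, not the values minikube uses) is:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying libvirt's DHCP leases; it is a stub here.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Add jitter and grow the delay, mirroring the increasing waits in the log.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("retry %d: %v; will retry after %v\n", attempt, err, wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	fmt.Println("gave up waiting for machine to come up")
}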
	I1031 00:12:40.131216  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:40.131331  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:40.146846  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:40.630673  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:40.630747  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:40.642955  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:41.131275  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:41.131410  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:41.144530  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:41.631108  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:41.631219  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:41.645873  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:42.131506  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:42.131641  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:42.147504  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:42.630664  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:42.630769  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:42.645755  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:43.131375  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:43.131503  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:43.143357  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:43.631616  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:43.631714  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:43.647203  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.130693  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.130791  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.143566  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.630736  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.630816  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.642486  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:43.535831  248718 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.116078442s)
	I1031 00:12:43.535864  248718 crio.go:451] Took 3.116189 seconds to extract the tarball
	I1031 00:12:43.535877  248718 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 00:12:43.579902  248718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:12:43.635701  248718 crio.go:496] all images are preloaded for cri-o runtime.
	I1031 00:12:43.635724  248718 cache_images.go:84] Images are preloaded, skipping loading
	I1031 00:12:43.635796  248718 ssh_runner.go:195] Run: crio config
	I1031 00:12:43.714916  248718 cni.go:84] Creating CNI manager for ""
	I1031 00:12:43.714939  248718 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:12:43.714958  248718 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:12:43.714976  248718 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-078843 NodeName:embed-certs-078843 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 00:12:43.715146  248718 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-078843"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 00:12:43.715232  248718 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-078843 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-078843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 00:12:43.715295  248718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 00:12:43.726847  248718 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:12:43.726938  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:12:43.738352  248718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1031 00:12:43.756439  248718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:12:43.773955  248718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
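
Annotation: the kubeadm config printed above (kubeadm.go:181) is rendered in memory and copied to /var/tmp/minikube/kubeadm.yaml.new as shown in the scp line. A hedged sketch of rendering such a config with text/template; the struct and fields are illustrative and not minikube's actual options type:

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeadmParams is a hypothetical subset of the values visible in the log above.
    type kubeadmParams struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    	PodSubnet        string
    	ServiceSubnet    string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	p := kubeadmParams{
    		AdvertiseAddress: "192.168.50.2",
    		BindPort:         8443,
    		NodeName:         "embed-certs-078843",
    		PodSubnet:        "10.244.0.0/16",
    		ServiceSubnet:    "10.96.0.0/12",
    	}
    	// Render to stdout; the real flow writes the rendered file and scp's it to the VM.
    	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }
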
	I1031 00:12:43.793790  248718 ssh_runner.go:195] Run: grep 192.168.50.2	control-plane.minikube.internal$ /etc/hosts
	I1031 00:12:43.798155  248718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
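
Annotation: the bash one-liner above rewrites /etc/hosts by dropping any stale control-plane.minikube.internal entry and appending the node's current IP. The same transformation as a hedged Go sketch; the sample hosts content is made up for illustration:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Hypothetical existing /etc/hosts content with a stale control-plane entry.
    	hosts := "127.0.0.1\tlocalhost\n192.168.50.5\tcontrol-plane.minikube.internal\n"
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		// Mirrors the logged `grep -v $'\tcontrol-plane.minikube.internal$'`.
    		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	// Mirrors the appended echo with the node's current IP.
    	kept = append(kept, "192.168.50.2\tcontrol-plane.minikube.internal")
    	fmt.Println(strings.Join(kept, "\n"))
    }
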
	I1031 00:12:43.811602  248718 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843 for IP: 192.168.50.2
	I1031 00:12:43.811649  248718 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:12:43.811819  248718 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:12:43.811877  248718 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:12:43.811963  248718 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/client.key
	I1031 00:12:43.812051  248718 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/apiserver.key.e10f976c
	I1031 00:12:43.812117  248718 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/proxy-client.key
	I1031 00:12:43.812261  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:12:43.812301  248718 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:12:43.812317  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:12:43.812359  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:12:43.812395  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:12:43.812430  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:12:43.812491  248718 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:12:43.813192  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:12:43.841097  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:12:43.867995  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:12:43.892834  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/embed-certs-078843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1031 00:12:43.917649  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:12:43.942299  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:12:43.971154  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:12:43.995032  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:12:44.022277  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:12:44.047549  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:12:44.071370  248718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:12:44.095933  248718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:12:44.113479  248718 ssh_runner.go:195] Run: openssl version
	I1031 00:12:44.119266  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:12:44.133710  248718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:12:44.140098  248718 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:12:44.140180  248718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:12:44.146416  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:12:44.159207  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:12:44.171618  248718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:44.178288  248718 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:44.178375  248718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:12:44.186339  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 00:12:44.200864  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:12:44.212513  248718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:12:44.217549  248718 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:12:44.217616  248718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:12:44.225170  248718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
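
Annotation: the ls/openssl/ln sequence above installs each CA into the guest's trust store under its OpenSSL subject-hash name (e.g. b5213941.0), which is how system TLS clients look up trusted certificates. A hedged Go sketch of the same idea; it only prints the ln command rather than creating the root-owned symlink:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
    	// openssl prints the subject hash that the /etc/ssl/certs symlink is named after.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	// Equivalent to the logged: ln -fs <cert> /etc/ssl/certs/<hash>.0 (needs root).
    	fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", cert, hash)
    }
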
	I1031 00:12:44.239600  248718 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:12:44.244470  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:12:44.252637  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:12:44.260635  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:12:44.269017  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:12:44.277210  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:12:44.285394  248718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
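
Annotation: `openssl x509 -checkend 86400` exits zero only if the certificate will still be valid 86400 seconds (24 h) from now; since all of the checks above pass, the existing control-plane certificates are reused. A hedged Go sketch of the same expiry check using crypto/x509 instead of openssl (path copied from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Same idea as -checkend 86400: fail if expiry falls within the next 24 hours.
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another 24h")
    }
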
	I1031 00:12:44.293419  248718 kubeadm.go:404] StartCluster: {Name:embed-certs-078843 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-078843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:12:44.293507  248718 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:12:44.293620  248718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:12:44.339212  248718 cri.go:89] found id: ""
	I1031 00:12:44.339302  248718 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 00:12:44.350219  248718 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 00:12:44.350249  248718 kubeadm.go:636] restartCluster start
	I1031 00:12:44.350315  248718 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 00:12:44.360185  248718 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.361826  248718 kubeconfig.go:92] found "embed-certs-078843" server: "https://192.168.50.2:8443"
	I1031 00:12:44.365579  248718 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 00:12:44.376923  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.377021  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.390684  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.390708  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.390768  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.404614  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:44.905332  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:44.905451  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:44.918162  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:45.405760  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:45.405845  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:45.419071  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:45.905669  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:45.905770  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:45.922243  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:46.404757  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:46.404870  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:46.419662  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:46.905223  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:46.905328  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:46.919993  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:47.405571  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:47.405660  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:47.418433  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:45.944837  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:45.945386  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:45.945422  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:45.945318  249635 retry.go:31] will retry after 1.840120385s: waiting for machine to come up
	I1031 00:12:47.787276  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:47.787807  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:47.787844  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:47.787751  249635 retry.go:31] will retry after 2.306470153s: waiting for machine to come up
	I1031 00:12:45.131185  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:45.225229  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:45.237425  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:45.630872  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:45.630948  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:45.644580  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:46.131199  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:46.131280  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:46.142872  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:46.631467  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:46.631545  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:46.648339  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:47.130861  248387 api_server.go:166] Checking apiserver status ...
	I1031 00:12:47.131000  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:47.146189  248387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:47.610939  248387 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:12:47.610999  248387 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:12:47.611016  248387 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:12:47.611107  248387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:12:47.656888  248387 cri.go:89] found id: ""
	I1031 00:12:47.656982  248387 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:12:47.678724  248387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:12:47.688879  248387 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:12:47.688985  248387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:12:47.697091  248387 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:12:47.697115  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:47.837056  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:48.448497  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:48.639877  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:48.735406  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:48.824428  248387 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:12:48.824521  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:48.840207  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:49.357050  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:49.857029  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:47.905449  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:47.905552  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:47.921939  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:48.405557  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:48.405656  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:48.417674  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:48.905114  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:48.905225  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:48.919218  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:49.404811  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:49.404908  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:49.420062  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:49.905655  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:49.905769  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:49.922828  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:50.405471  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:50.405578  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:50.423259  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:50.904727  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:50.904819  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:50.920673  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:51.405155  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:51.405246  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:51.421731  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:51.905024  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:51.905101  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:51.919385  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:52.404843  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:52.404985  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:52.420088  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
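
Annotation: the long run of `sudo pgrep -xnf kube-apiserver.*minikube.*` checks above is minikube polling for the apiserver process until a deadline; pgrep's exit status 1 with empty stdout/stderr simply means no matching process exists yet. A hedged sketch of that polling pattern (command copied from the log; interval and deadline are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		// pgrep exits 1 when nothing matches, which the log reports as
    		// "Process exited with status 1".
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			fmt.Println("kube-apiserver process is up")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the kube-apiserver process")
    }
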
	I1031 00:12:50.095827  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:50.096326  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:50.096365  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:50.096281  249635 retry.go:31] will retry after 3.872051375s: waiting for machine to come up
	I1031 00:12:53.970393  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:53.970918  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | unable to find current IP address of domain default-k8s-diff-port-892233 in network mk-default-k8s-diff-port-892233
	I1031 00:12:53.970956  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | I1031 00:12:53.970839  249635 retry.go:31] will retry after 5.345847198s: waiting for machine to come up
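
Annotation: the retry.go lines above show libmachine waiting for the restarted VM to obtain a DHCP lease, backing off with a randomized, growing delay between attempts (1.84s, 2.31s, 3.87s, 5.35s). A hedged sketch of that wait loop; lookupIP is a hypothetical stand-in for querying libvirt's DHCP leases for the domain's MAC address:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a hypothetical stand-in; the real code asks libvirt for the lease.
    func lookupIP(mac string) (string, bool) {
    	return "", false // pretend the lease has not appeared yet
    }

    func main() {
    	mac := "52:54:00:f4:e2:1e" // MAC address from the log above
    	for attempt := 1; attempt <= 10; attempt++ {
    		if ip, ok := lookupIP(mac); ok {
    			fmt.Println("machine came up at", ip)
    			return
    		}
    		// Randomized, growing backoff, mirroring the "will retry after ..." lines.
    		delay := time.Duration(float64(attempt) * float64(time.Second) * (0.5 + rand.Float64()))
    		fmt.Printf("no IP yet, retrying after %s\n", delay)
    		time.Sleep(delay)
    	}
    }
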
	I1031 00:12:50.357101  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:50.857024  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:51.357298  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:51.380143  248387 api_server.go:72] duration metric: took 2.555721824s to wait for apiserver process to appear ...
	I1031 00:12:51.380180  248387 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:12:51.380220  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:54.457683  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:12:54.457719  248387 api_server.go:103] status: https://192.168.61.168:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:12:54.457733  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:54.509385  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:12:54.509424  248387 api_server.go:103] status: https://192.168.61.168:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:12:55.010185  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:55.017172  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:12:55.017201  248387 api_server.go:103] status: https://192.168.61.168:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:12:55.510171  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:55.517062  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:12:55.517114  248387 api_server.go:103] status: https://192.168.61.168:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:12:56.009671  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:12:56.017135  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 200:
	ok
	I1031 00:12:56.026278  248387 api_server.go:141] control plane version: v1.28.3
	I1031 00:12:56.026307  248387 api_server.go:131] duration metric: took 4.646117858s to wait for apiserver health ...
	I1031 00:12:56.026319  248387 cni.go:84] Creating CNI manager for ""
	I1031 00:12:56.026331  248387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:12:56.028208  248387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
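
Annotation: the 403 -> 500 -> 200 progression on /healthz above is normal during apiserver startup: anonymous requests are forbidden until the rbac/bootstrap-roles poststarthook has created the default roles, after which the remaining hooks clear one by one until the probe returns "ok". A hedged Go sketch of that polling loop (endpoint copied from the log; TLS verification is skipped here purely because the probe only cares about the status code):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for i := 0; i < 20; i++ {
    		resp, err := client.Get("https://192.168.61.168:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return // the log's "returned 200: ok"
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("apiserver never became healthy")
    }
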
	I1031 00:12:52.904735  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:52.904835  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:52.917320  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:53.405426  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:53.405546  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:53.420386  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:53.904921  248718 api_server.go:166] Checking apiserver status ...
	I1031 00:12:53.905039  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:12:53.917303  248718 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:12:54.377921  248718 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:12:54.377976  248718 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:12:54.377991  248718 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:12:54.378079  248718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:12:54.418685  248718 cri.go:89] found id: ""
	I1031 00:12:54.418768  248718 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:12:54.436536  248718 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:12:54.451466  248718 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:12:54.451534  248718 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:12:54.464460  248718 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:12:54.464484  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:54.601286  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:55.468262  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:55.664604  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:55.761171  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:55.838690  248718 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:12:55.838793  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:55.857817  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:56.379368  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:56.878782  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:57.379756  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:56.029552  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:12:56.078774  248387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:12:56.128262  248387 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:12:56.139995  248387 system_pods.go:59] 8 kube-system pods found
	I1031 00:12:56.140025  248387 system_pods.go:61] "coredns-5dd5756b68-qbvjb" [92f771d8-381b-4e38-945f-ad5ceae72b80] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1031 00:12:56.140035  248387 system_pods.go:61] "etcd-no-preload-640155" [44fcbc32-757b-4406-97ed-88ad76ae4eee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1031 00:12:56.140042  248387 system_pods.go:61] "kube-apiserver-no-preload-640155" [b92b3dec-827f-4221-8c28-83a738186e52] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1031 00:12:56.140048  248387 system_pods.go:61] "kube-controller-manager-no-preload-640155" [62661788-bde2-42b9-9469-a2f2c51ee283] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1031 00:12:56.140057  248387 system_pods.go:61] "kube-proxy-rv76j" [293b1dd9-fc85-4647-91c9-874ad357d222] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1031 00:12:56.140063  248387 system_pods.go:61] "kube-scheduler-no-preload-640155" [6a11d962-b407-467e-b8a0-9a101b16e4d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1031 00:12:56.140076  248387 system_pods.go:61] "metrics-server-57f55c9bc5-nm8dj" [3924727e-2734-497d-b1b1-d8f9a0ab095a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:12:56.140090  248387 system_pods.go:61] "storage-provisioner" [f8e0a3fa-eaf1-45e1-afbc-a5b2287e7703] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1031 00:12:56.140100  248387 system_pods.go:74] duration metric: took 11.816257ms to wait for pod list to return data ...
	I1031 00:12:56.140110  248387 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:12:56.143298  248387 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:12:56.143327  248387 node_conditions.go:123] node cpu capacity is 2
	I1031 00:12:56.143365  248387 node_conditions.go:105] duration metric: took 3.247248ms to run NodePressure ...
	I1031 00:12:56.143402  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:12:56.398227  248387 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1031 00:12:56.403101  248387 kubeadm.go:787] kubelet initialised
	I1031 00:12:56.403124  248387 kubeadm.go:788] duration metric: took 4.866042ms waiting for restarted kubelet to initialise ...
	I1031 00:12:56.403134  248387 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:12:56.408758  248387 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-qbvjb" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:56.416185  248387 pod_ready.go:97] node "no-preload-640155" hosting pod "coredns-5dd5756b68-qbvjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.416218  248387 pod_ready.go:81] duration metric: took 7.431969ms waiting for pod "coredns-5dd5756b68-qbvjb" in "kube-system" namespace to be "Ready" ...
	E1031 00:12:56.416229  248387 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-640155" hosting pod "coredns-5dd5756b68-qbvjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.416238  248387 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:56.421589  248387 pod_ready.go:97] node "no-preload-640155" hosting pod "etcd-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.421611  248387 pod_ready.go:81] duration metric: took 5.364261ms waiting for pod "etcd-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	E1031 00:12:56.421619  248387 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-640155" hosting pod "etcd-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.421624  248387 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:56.427046  248387 pod_ready.go:97] node "no-preload-640155" hosting pod "kube-apiserver-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.427075  248387 pod_ready.go:81] duration metric: took 5.443698ms waiting for pod "kube-apiserver-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	E1031 00:12:56.427086  248387 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-640155" hosting pod "kube-apiserver-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.427098  248387 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:56.534169  248387 pod_ready.go:97] node "no-preload-640155" hosting pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.534224  248387 pod_ready.go:81] duration metric: took 107.102474ms waiting for pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	E1031 00:12:56.534241  248387 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-640155" hosting pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-640155" has status "Ready":"False"
	I1031 00:12:56.534255  248387 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rv76j" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:57.332793  248387 pod_ready.go:92] pod "kube-proxy-rv76j" in "kube-system" namespace has status "Ready":"True"
	I1031 00:12:57.332824  248387 pod_ready.go:81] duration metric: took 798.55794ms waiting for pod "kube-proxy-rv76j" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:57.332838  248387 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:12:59.642186  248387 pod_ready.go:102] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"False"
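
Annotation: pod_ready.go above polls each system-critical pod's Ready condition, skipping pods whose node is itself not yet Ready. A hedged sketch of the same check via kubectl's jsonpath filter rather than the Go client (assumes kubectl is already pointed at this cluster; pod name copied from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	args := []string{"get", "pod", "kube-scheduler-no-preload-640155", "-n", "kube-system",
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`}
    	deadline := time.Now().Add(4 * time.Minute) // same budget as the log's "waiting up to 4m0s"
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", args...).Output()
    		if err == nil && strings.TrimSpace(string(out)) == "True" {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for the pod to become Ready")
    }
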
	I1031 00:13:00.818958  248084 start.go:369] acquired machines lock for "old-k8s-version-225140" in 1m2.435313483s
	I1031 00:13:00.819017  248084 start.go:96] Skipping create...Using existing machine configuration
	I1031 00:13:00.819032  248084 fix.go:54] fixHost starting: 
	I1031 00:13:00.819456  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:00.819490  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:00.838737  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I1031 00:13:00.839191  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:00.839773  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:13:00.839794  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:00.840290  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:00.840514  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:00.840697  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:13:00.843346  248084 fix.go:102] recreateIfNeeded on old-k8s-version-225140: state=Stopped err=<nil>
	I1031 00:13:00.843381  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	W1031 00:13:00.843658  248084 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 00:13:00.848997  248084 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-225140" ...
	I1031 00:12:59.318443  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.319011  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Found IP for machine: 192.168.39.2
	I1031 00:12:59.319037  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Reserving static IP address...
	I1031 00:12:59.319070  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has current primary IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.319522  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-892233", mac: "52:54:00:f4:e2:1e", ip: "192.168.39.2"} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.319557  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Reserved static IP address: 192.168.39.2
	I1031 00:12:59.319595  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | skip adding static IP to network mk-default-k8s-diff-port-892233 - found existing host DHCP lease matching {name: "default-k8s-diff-port-892233", mac: "52:54:00:f4:e2:1e", ip: "192.168.39.2"}
	I1031 00:12:59.319620  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | Getting to WaitForSSH function...
	I1031 00:12:59.319653  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Waiting for SSH to be available...
	I1031 00:12:59.322357  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.322780  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.322819  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.322938  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | Using SSH client type: external
	I1031 00:12:59.322969  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa (-rw-------)
	I1031 00:12:59.323009  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:12:59.323029  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | About to run SSH command:
	I1031 00:12:59.323064  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | exit 0
	I1031 00:12:59.421581  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | SSH cmd err, output: <nil>: 
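
Annotation: the WaitForSSH exchange above repeatedly runs `exit 0` over ssh with the machine's private key until the command succeeds. A hedged sketch of the cheaper precursor check, waiting for the guest's port 22 to accept TCP connections at all (address taken from the DHCP lease in the log):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	addr := "192.168.39.2:22" // IP reserved for default-k8s-diff-port-892233 above
    	for i := 0; i < 30; i++ {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			fmt.Println("sshd is accepting connections; the real check then runs `exit 0` over ssh")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for ssh")
    }
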
	I1031 00:12:59.421963  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetConfigRaw
	I1031 00:12:59.422651  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetIP
	I1031 00:12:59.425540  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.425916  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.425961  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.426201  249055 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/config.json ...
	I1031 00:12:59.426454  249055 machine.go:88] provisioning docker machine ...
	I1031 00:12:59.426481  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:12:59.426720  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetMachineName
	I1031 00:12:59.426879  249055 buildroot.go:166] provisioning hostname "default-k8s-diff-port-892233"
	I1031 00:12:59.426898  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetMachineName
	I1031 00:12:59.427067  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:12:59.429588  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.429937  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.429975  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.430208  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:12:59.430403  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:12:59.430573  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:12:59.430690  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:12:59.430852  249055 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:59.431368  249055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1031 00:12:59.431386  249055 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-892233 && echo "default-k8s-diff-port-892233" | sudo tee /etc/hostname
	I1031 00:12:59.572253  249055 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-892233
	
	I1031 00:12:59.572299  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:12:59.575534  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.575858  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.575919  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.576140  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:12:59.576366  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:12:59.576592  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:12:59.576766  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:12:59.576919  249055 main.go:141] libmachine: Using SSH client type: native
	I1031 00:12:59.577349  249055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1031 00:12:59.577372  249055 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-892233' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-892233/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-892233' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:12:59.714987  249055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:12:59.715020  249055 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:12:59.715079  249055 buildroot.go:174] setting up certificates
	I1031 00:12:59.715094  249055 provision.go:83] configureAuth start
	I1031 00:12:59.715115  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetMachineName
	I1031 00:12:59.715440  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetIP
	I1031 00:12:59.718485  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.718900  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.718932  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.719039  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:12:59.721488  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.721844  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:12:59.721874  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:12:59.722068  249055 provision.go:138] copyHostCerts
	I1031 00:12:59.722141  249055 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:12:59.722155  249055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:12:59.722227  249055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:12:59.722363  249055 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:12:59.722377  249055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:12:59.722402  249055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:12:59.722528  249055 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:12:59.722538  249055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:12:59.722560  249055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:12:59.722619  249055 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-892233 san=[192.168.39.2 192.168.39.2 localhost 127.0.0.1 minikube default-k8s-diff-port-892233]
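
	The step above generates a server certificate whose SANs cover the machine IP, localhost and the machine name. A minimal self-signed sketch with crypto/x509 follows; minikube signs with its CA key instead of self-signing, and the organization, lifetime and stdout output here are illustrative assumptions:

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func main() {
	        // Generate a key and a self-signed certificate carrying the SANs seen in the log.
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            panic(err)
	        }
	        tmpl := x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-892233"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            DNSNames:     []string{"localhost", "minikube", "default-k8s-diff-port-892233"},
	            IPAddresses:  []net.IP{net.ParseIP("192.168.39.2"), net.ParseIP("127.0.0.1")},
	        }
	        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	        if err != nil {
	            panic(err)
	        }
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }
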
	I1031 00:13:00.038821  249055 provision.go:172] copyRemoteCerts
	I1031 00:13:00.038892  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:13:00.038924  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.042237  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.042585  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.042627  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.042753  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.042976  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.043252  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.043410  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:13:00.130665  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:13:00.158853  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1031 00:13:00.188023  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 00:13:00.214990  249055 provision.go:86] duration metric: configureAuth took 499.878655ms
	I1031 00:13:00.215020  249055 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:13:00.215284  249055 config.go:182] Loaded profile config "default-k8s-diff-port-892233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:13:00.215445  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.218339  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.218821  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.218861  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.219039  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.219282  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.219500  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.219672  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.219873  249055 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:00.220371  249055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1031 00:13:00.220411  249055 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:13:00.567578  249055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:13:00.567663  249055 machine.go:91] provisioned docker machine in 1.141189726s
	I1031 00:13:00.567680  249055 start.go:300] post-start starting for "default-k8s-diff-port-892233" (driver="kvm2")
	I1031 00:13:00.567695  249055 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:13:00.567719  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.568094  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:13:00.568134  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.570983  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.571434  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.571478  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.571649  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.571849  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.572010  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.572173  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:13:00.660300  249055 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:13:00.665751  249055 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:13:00.665779  249055 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:13:00.665853  249055 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:13:00.665958  249055 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:13:00.666046  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:13:00.677668  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:13:00.702125  249055 start.go:303] post-start completed in 134.425173ms
	I1031 00:13:00.702165  249055 fix.go:56] fixHost completed within 23.735576451s
	I1031 00:13:00.702195  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.705554  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.705976  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.706029  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.706319  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.706545  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.706722  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.706872  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.707040  249055 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:00.707449  249055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1031 00:13:00.707470  249055 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 00:13:00.818749  249055 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698711180.762641951
	
	I1031 00:13:00.818785  249055 fix.go:206] guest clock: 1698711180.762641951
	I1031 00:13:00.818797  249055 fix.go:219] Guest: 2023-10-31 00:13:00.762641951 +0000 UTC Remote: 2023-10-31 00:13:00.70217124 +0000 UTC m=+181.580385758 (delta=60.470711ms)
	I1031 00:13:00.818850  249055 fix.go:190] guest clock delta is within tolerance: 60.470711ms
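
	The fix step above reads the guest clock over SSH and compares it against the host clock, resyncing only when the delta exceeds a tolerance. A tiny sketch of that comparison; the one-second tolerance is an assumption for illustration, and the 60ms delta mirrors the value in the log:

	    package main

	    import (
	        "fmt"
	        "time"
	    )

	    // clockWithinTolerance reports whether guest and host differ by no more than tol.
	    func clockWithinTolerance(guest, host time.Time, tol time.Duration) bool {
	        delta := guest.Sub(host)
	        if delta < 0 {
	            delta = -delta
	        }
	        return delta <= tol
	    }

	    func main() {
	        host := time.Now()
	        guest := host.Add(60 * time.Millisecond) // delta comparable to the log's 60.470711ms
	        fmt.Println(clockWithinTolerance(guest, host, time.Second)) // true
	    }
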
	I1031 00:13:00.818861  249055 start.go:83] releasing machines lock for "default-k8s-diff-port-892233", held for 23.852333569s
	I1031 00:13:00.818897  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.819199  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetIP
	I1031 00:13:00.822674  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.823152  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.823194  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.823436  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.824107  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.824336  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:13:00.824543  249055 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:13:00.824603  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.824669  249055 ssh_runner.go:195] Run: cat /version.json
	I1031 00:13:00.824698  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:13:00.827622  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.828092  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.828149  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.828176  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.828377  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:00.828420  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:00.828477  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.828558  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:13:00.828638  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.828741  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:13:00.828817  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.828926  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:13:00.829014  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:13:00.829694  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:13:00.945937  249055 ssh_runner.go:195] Run: systemctl --version
	I1031 00:13:00.951731  249055 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:13:01.099346  249055 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:13:01.106701  249055 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:13:01.106789  249055 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:13:01.122651  249055 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 00:13:01.122738  249055 start.go:472] detecting cgroup driver to use...
	I1031 00:13:01.122839  249055 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:13:01.140968  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:13:01.159184  249055 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:13:01.159267  249055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:13:01.176636  249055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:13:01.190420  249055 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:13:01.304327  249055 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:13:01.446312  249055 docker.go:214] disabling docker service ...
	I1031 00:13:01.446440  249055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:13:01.462043  249055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:13:01.478402  249055 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:13:01.618099  249055 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:13:01.745376  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 00:13:01.758262  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:13:01.774927  249055 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1031 00:13:01.774999  249055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:01.784376  249055 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:13:01.784441  249055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:01.793769  249055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:01.802954  249055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:01.813429  249055 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 00:13:01.822730  249055 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:13:01.832032  249055 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 00:13:01.832103  249055 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 00:13:01.845005  249055 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 00:13:01.855358  249055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:13:01.997815  249055 ssh_runner.go:195] Run: sudo systemctl restart crio
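
	The sequence above rewrites the pause image and cgroup manager keys in the cri-o drop-in with sed over SSH and then restarts the service. A local-file Go sketch of the same substitutions (the drop-in path is the one the log touches; doing this in-process instead of shelling out to sed is an assumption, not minikube's code path):

	    package main

	    import (
	        "os"
	        "regexp"
	    )

	    func main() {
	        // Rewrite the pause image and cgroup manager keys in the cri-o drop-in,
	        // mirroring the sed edits shown in the log.
	        const path = "/etc/crio/crio.conf.d/02-crio.conf"
	        data, err := os.ReadFile(path)
	        if err != nil {
	            panic(err)
	        }
	        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
	            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
	            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	        if err := os.WriteFile(path, data, 0o644); err != nil {
	            panic(err)
	        }
	    }
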
	I1031 00:13:02.229016  249055 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:13:02.229090  249055 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:13:02.233980  249055 start.go:540] Will wait 60s for crictl version
	I1031 00:13:02.234044  249055 ssh_runner.go:195] Run: which crictl
	I1031 00:13:02.237901  249055 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:13:02.280450  249055 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:13:02.280562  249055 ssh_runner.go:195] Run: crio --version
	I1031 00:13:02.326608  249055 ssh_runner.go:195] Run: crio --version
	I1031 00:13:02.381010  249055 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1031 00:12:57.879480  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:58.378990  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:12:58.401245  248718 api_server.go:72] duration metric: took 2.5625596s to wait for apiserver process to appear ...
	I1031 00:12:58.401294  248718 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:12:58.401317  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:01.483261  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:01.483293  248718 api_server.go:103] status: https://192.168.50.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:01.483309  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:01.586135  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:01.586172  248718 api_server.go:103] status: https://192.168.50.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:02.086932  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:02.095676  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:13:02.095714  248718 api_server.go:103] status: https://192.168.50.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:13:02.586339  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:02.599335  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:13:02.599376  248718 api_server.go:103] status: https://192.168.50.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:13:03.087312  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:13:03.095444  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 200:
	ok
	I1031 00:13:03.107809  248718 api_server.go:141] control plane version: v1.28.3
	I1031 00:13:03.107842  248718 api_server.go:131] duration metric: took 4.706538937s to wait for apiserver health ...
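
	The loop above polls the apiserver's /healthz endpoint, tolerating 403 and 500 responses until it returns 200. A minimal Go sketch of that polling pattern; the retry interval and the anonymous, verification-skipping client are assumptions, and minikube's api_server.go differs in detail:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    // waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
	    // TLS verification is skipped because this anonymous probe does not trust the
	    // apiserver certificate, matching the 403/500 responses seen while booting.
	    func waitForHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout:   5 * time.Second,
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil
	                }
	                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	            }
	            time.Sleep(500 * time.Millisecond) // same order of retry interval as the log
	        }
	        return fmt.Errorf("apiserver never became healthy at %s", url)
	    }

	    func main() {
	        if err := waitForHealthz("https://192.168.50.2:8443/healthz", 4*time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }
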
	I1031 00:13:03.107855  248718 cni.go:84] Creating CNI manager for ""
	I1031 00:13:03.107864  248718 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:03.110057  248718 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:13:02.382546  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetIP
	I1031 00:13:02.386646  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:02.387022  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:13:02.387068  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:13:02.387291  249055 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 00:13:02.393394  249055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:13:02.408630  249055 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:13:02.408723  249055 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:13:02.461303  249055 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1031 00:13:02.461388  249055 ssh_runner.go:195] Run: which lz4
	I1031 00:13:02.466160  249055 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1031 00:13:02.472133  249055 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 00:13:02.472175  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1031 00:13:01.647436  248387 pod_ready.go:102] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:03.653247  248387 pod_ready.go:102] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:03.111616  248718 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:13:03.142561  248718 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:13:03.210454  248718 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:13:03.229202  248718 system_pods.go:59] 8 kube-system pods found
	I1031 00:13:03.229253  248718 system_pods.go:61] "coredns-5dd5756b68-dqrs4" [f6d80a09-c397-4c78-a038-f07cad11de9c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1031 00:13:03.229269  248718 system_pods.go:61] "etcd-embed-certs-078843" [2dd3d20f-1309-4ec9-ab75-6b00cadc5827] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1031 00:13:03.229278  248718 system_pods.go:61] "kube-apiserver-embed-certs-078843" [6a41123e-11a9-4aff-8f78-802b8f59a1bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1031 00:13:03.229289  248718 system_pods.go:61] "kube-controller-manager-embed-certs-078843" [9ccb551e-3e3f-4cdc-991e-65b41febf105] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1031 00:13:03.229302  248718 system_pods.go:61] "kube-proxy-287dq" [c9c3a3a9-ff79-4cd8-ab26-a4ca2bec1fd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1031 00:13:03.229321  248718 system_pods.go:61] "kube-scheduler-embed-certs-078843" [13a0f095-b945-437c-a7ef-929739bfcb01] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1031 00:13:03.229339  248718 system_pods.go:61] "metrics-server-57f55c9bc5-pm6qx" [5ed61015-eb88-4381-adc3-8d1f4021c6aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:13:03.229353  248718 system_pods.go:61] "storage-provisioner" [6bce0572-aad8-4a9f-978f-9bd0ff62904a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1031 00:13:03.229369  248718 system_pods.go:74] duration metric: took 18.888134ms to wait for pod list to return data ...
	I1031 00:13:03.229379  248718 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:13:03.269761  248718 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:13:03.269808  248718 node_conditions.go:123] node cpu capacity is 2
	I1031 00:13:03.269821  248718 node_conditions.go:105] duration metric: took 40.435389ms to run NodePressure ...
	I1031 00:13:03.269843  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:03.828792  248718 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1031 00:13:03.840423  248718 kubeadm.go:787] kubelet initialised
	I1031 00:13:03.840449  248718 kubeadm.go:788] duration metric: took 11.631934ms waiting for restarted kubelet to initialise ...
	I1031 00:13:03.840461  248718 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:13:03.856214  248718 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:03.885090  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.885128  248718 pod_ready.go:81] duration metric: took 28.821802ms waiting for pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:03.885141  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.885169  248718 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:03.903365  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "etcd-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.903468  248718 pod_ready.go:81] duration metric: took 18.286782ms waiting for pod "etcd-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:03.903494  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "etcd-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.903516  248718 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:03.918470  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.918511  248718 pod_ready.go:81] duration metric: took 14.954407ms waiting for pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:03.918536  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.918548  248718 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:03.933999  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.934040  248718 pod_ready.go:81] duration metric: took 15.480835ms waiting for pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:03.934057  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:03.934068  248718 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-287dq" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:04.237338  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "kube-proxy-287dq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.237374  248718 pod_ready.go:81] duration metric: took 303.296061ms waiting for pod "kube-proxy-287dq" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:04.237389  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "kube-proxy-287dq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.237398  248718 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:04.634179  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.634222  248718 pod_ready.go:81] duration metric: took 396.814691ms waiting for pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:04.634238  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.634253  248718 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:05.035746  248718 pod_ready.go:97] node "embed-certs-078843" hosting pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:05.035785  248718 pod_ready.go:81] duration metric: took 401.520697ms waiting for pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace to be "Ready" ...
	E1031 00:13:05.035801  248718 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-078843" hosting pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:05.035816  248718 pod_ready.go:38] duration metric: took 1.195339888s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
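
	The extra wait above fetches each system-critical pod and inspects its Ready condition, skipping pods whose node is not yet Ready. A hedged client-go sketch of the per-pod check; the kubeconfig path is a placeholder, the pod name is simply one taken from the log, and this is not minikube's pod_ready.go implementation:

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // podIsReady reports whether the named pod has its Ready condition set to True.
	    func podIsReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
	        pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	        if err != nil {
	            return false, err
	        }
	        for _, cond := range pod.Status.Conditions {
	            if cond.Type == corev1.PodReady {
	                return cond.Status == corev1.ConditionTrue, nil
	            }
	        }
	        return false, nil
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	        if err != nil {
	            panic(err)
	        }
	        clientset, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        ready, err := podIsReady(clientset, "kube-system", "etcd-embed-certs-078843")
	        fmt.Println(ready, err)
	    }
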
	I1031 00:13:05.035852  248718 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:13:05.053467  248718 ops.go:34] apiserver oom_adj: -16
	I1031 00:13:05.053499  248718 kubeadm.go:640] restartCluster took 20.703241237s
	I1031 00:13:05.053510  248718 kubeadm.go:406] StartCluster complete in 20.760104259s
	I1031 00:13:05.053534  248718 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:13:05.053649  248718 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:13:05.056586  248718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:13:05.056927  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:13:05.057035  248718 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:13:05.057123  248718 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-078843"
	I1031 00:13:05.057141  248718 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-078843"
	W1031 00:13:05.057149  248718 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:13:05.057204  248718 config.go:182] Loaded profile config "embed-certs-078843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:13:05.057234  248718 addons.go:69] Setting default-storageclass=true in profile "embed-certs-078843"
	I1031 00:13:05.057211  248718 host.go:66] Checking if "embed-certs-078843" exists ...
	I1031 00:13:05.057248  248718 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-078843"
	I1031 00:13:05.057647  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.057682  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.057706  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.057743  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.057816  248718 addons.go:69] Setting metrics-server=true in profile "embed-certs-078843"
	I1031 00:13:05.057835  248718 addons.go:231] Setting addon metrics-server=true in "embed-certs-078843"
	W1031 00:13:05.057846  248718 addons.go:240] addon metrics-server should already be in state true
	I1031 00:13:05.057940  248718 host.go:66] Checking if "embed-certs-078843" exists ...
	I1031 00:13:05.058407  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.058492  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.077590  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40411
	I1031 00:13:05.077948  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44471
	I1031 00:13:05.078081  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.078347  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.078769  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.078785  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.079028  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.079054  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.079408  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.085132  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.085145  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34653
	I1031 00:13:05.085597  248718 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-078843" context rescaled to 1 replicas
	I1031 00:13:05.085640  248718 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:13:05.088029  248718 out.go:177] * Verifying Kubernetes components...
	I1031 00:13:05.085726  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.085922  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.086067  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:13:05.089646  248718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:13:05.089718  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.090571  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.090592  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.091096  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.091945  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.092003  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.095067  248718 addons.go:231] Setting addon default-storageclass=true in "embed-certs-078843"
	W1031 00:13:05.095093  248718 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:13:05.095131  248718 host.go:66] Checking if "embed-certs-078843" exists ...
	I1031 00:13:05.095551  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.095608  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.111102  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I1031 00:13:05.111739  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.112393  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.112413  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.112797  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.112983  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:13:05.114423  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37229
	I1031 00:13:05.114993  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.115615  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.115634  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.115848  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:13:05.116042  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.118503  248718 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:13:05.116288  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:13:05.120126  248718 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:13:05.120149  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:13:05.120184  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:13:05.120637  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I1031 00:13:05.121136  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.121582  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.121601  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.122054  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.122163  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:13:05.122536  248718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:13:05.122576  248718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:13:05.124417  248718 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:13:00.852003  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Start
	I1031 00:13:00.853038  248084 main.go:141] libmachine: (old-k8s-version-225140) Ensuring networks are active...
	I1031 00:13:00.853268  248084 main.go:141] libmachine: (old-k8s-version-225140) Ensuring network default is active
	I1031 00:13:00.853774  248084 main.go:141] libmachine: (old-k8s-version-225140) Ensuring network mk-old-k8s-version-225140 is active
	I1031 00:13:00.854290  248084 main.go:141] libmachine: (old-k8s-version-225140) Getting domain xml...
	I1031 00:13:00.855089  248084 main.go:141] libmachine: (old-k8s-version-225140) Creating domain...
	I1031 00:13:02.250983  248084 main.go:141] libmachine: (old-k8s-version-225140) Waiting to get IP...
	I1031 00:13:02.251883  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:02.252351  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:02.252421  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:02.252327  249826 retry.go:31] will retry after 242.989359ms: waiting for machine to come up
	I1031 00:13:02.497099  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:02.497647  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:02.497671  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:02.497581  249826 retry.go:31] will retry after 267.660992ms: waiting for machine to come up
	I1031 00:13:02.767445  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:02.770812  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:02.770846  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:02.770757  249826 retry.go:31] will retry after 311.592507ms: waiting for machine to come up
	I1031 00:13:03.085650  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:03.086233  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:03.086262  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:03.086139  249826 retry.go:31] will retry after 594.222148ms: waiting for machine to come up
	I1031 00:13:03.681721  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:03.682255  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:03.682286  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:03.682147  249826 retry.go:31] will retry after 758.043103ms: waiting for machine to come up
	I1031 00:13:04.442274  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:04.443048  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:04.443078  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:04.442997  249826 retry.go:31] will retry after 887.518169ms: waiting for machine to come up
	I1031 00:13:05.332541  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:05.333184  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:05.333212  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:05.333129  249826 retry.go:31] will retry after 851.434462ms: waiting for machine to come up
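
The growing delays above come from a poll-with-backoff loop that waits for the libvirt guest to obtain a DHCP lease. A minimal Go sketch of that pattern; waitForIP and lookupIP are illustrative names, not minikube's retry.go API:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookupIP until it succeeds or maxWait elapses,
	// backing off between attempts like the retry delays seen in the log.
	func waitForIP(lookupIP func() (string, error), maxWait time.Duration) (string, error) {
		deadline := time.Now().Add(maxWait)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			// roughly exponential backoff with jitter
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay *= 2
		}
		return "", errors.New("timed out waiting for machine to come up")
	}

	func main() {
		ip, err := waitForIP(func() (string, error) {
			return "", errors.New("no DHCP lease yet") // stand-in for a libvirt lease lookup
		}, 3*time.Second)
		fmt.Println(ip, err)
	}
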
	I1031 00:13:05.125889  248718 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:13:05.125912  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:13:05.125931  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:13:05.124466  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.126004  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:13:05.126025  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.125276  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:13:05.126198  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:13:05.126338  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:13:05.126414  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:13:05.131827  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:13:05.131844  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.131883  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:13:05.131916  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.132049  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:13:05.132274  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:13:05.132420  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:13:05.144729  248718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I1031 00:13:05.145178  248718 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:13:05.145775  248718 main.go:141] libmachine: Using API Version  1
	I1031 00:13:05.145795  248718 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:13:05.146202  248718 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:13:05.146381  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetState
	I1031 00:13:05.149644  248718 main.go:141] libmachine: (embed-certs-078843) Calling .DriverName
	I1031 00:13:05.150317  248718 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:13:05.150332  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:13:05.150350  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHHostname
	I1031 00:13:05.153417  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.153915  248718 main.go:141] libmachine: (embed-certs-078843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a8:73", ip: ""} in network mk-embed-certs-078843: {Iface:virbr2 ExpiryTime:2023-10-31 01:12:29 +0000 UTC Type:0 Mac:52:54:00:f5:a8:73 Iaid: IPaddr:192.168.50.2 Prefix:24 Hostname:embed-certs-078843 Clientid:01:52:54:00:f5:a8:73}
	I1031 00:13:05.153956  248718 main.go:141] libmachine: (embed-certs-078843) DBG | domain embed-certs-078843 has defined IP address 192.168.50.2 and MAC address 52:54:00:f5:a8:73 in network mk-embed-certs-078843
	I1031 00:13:05.154082  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHPort
	I1031 00:13:05.154266  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHKeyPath
	I1031 00:13:05.154606  248718 main.go:141] libmachine: (embed-certs-078843) Calling .GetSSHUsername
	I1031 00:13:05.154731  248718 sshutil.go:53] new ssh client: &{IP:192.168.50.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa Username:docker}
	I1031 00:13:05.279166  248718 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:13:05.279209  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:13:05.314989  248718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:13:05.318765  248718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:13:05.337844  248718 node_ready.go:35] waiting up to 6m0s for node "embed-certs-078843" to be "Ready" ...
	I1031 00:13:05.338209  248718 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1031 00:13:05.343889  248718 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:13:05.343913  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:13:05.391973  248718 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:13:05.392002  248718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:13:05.442745  248718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:13:06.821970  248718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.503163864s)
	I1031 00:13:06.822030  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.822047  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.821970  248718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.506945748s)
	I1031 00:13:06.822097  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.822123  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.822539  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.822568  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.822594  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.822620  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.822641  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.822654  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.822665  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.822689  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.822702  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.822711  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.823128  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.823187  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.823196  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.823249  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.823286  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.823305  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.838726  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.838749  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.839036  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.839101  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.839124  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.863966  248718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.421170822s)
	I1031 00:13:06.864085  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.864105  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.864472  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.864499  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.864511  248718 main.go:141] libmachine: Making call to close driver server
	I1031 00:13:06.864520  248718 main.go:141] libmachine: (embed-certs-078843) Calling .Close
	I1031 00:13:06.865117  248718 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:13:06.865133  248718 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:13:06.865136  248718 main.go:141] libmachine: (embed-certs-078843) DBG | Closing plugin on server side
	I1031 00:13:06.865144  248718 addons.go:467] Verifying addon metrics-server=true in "embed-certs-078843"
	I1031 00:13:06.868351  248718 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1031 00:13:06.869950  248718 addons.go:502] enable addons completed in 1.812918702s: enabled=[storage-provisioner default-storageclass metrics-server]
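
The addon sequence above copies each manifest to /etc/kubernetes/addons on the guest and applies it with the bundled kubectl over SSH. A rough sketch of that remote-exec step using golang.org/x/crypto/ssh, with the key path and command taken from the log; this is an illustration, not minikube's ssh_runner implementation:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runOverSSH connects with a private key and runs one command, returning combined output.
	func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		})
		if err != nil {
			return "", err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()
		out, err := session.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runOverSSH("192.168.50.2:22", "docker",
			"/home/jenkins/minikube-integration/17527-208817/.minikube/machines/embed-certs-078843/id_rsa",
			"sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
		fmt.Println(out, err)
	}
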
	I1031 00:13:07.438581  248718 node_ready.go:58] node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:04.402138  249055 crio.go:444] Took 1.936056 seconds to copy over tarball
	I1031 00:13:04.402221  249055 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 00:13:07.956805  249055 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.554540356s)
	I1031 00:13:07.956841  249055 crio.go:451] Took 3.554667 seconds to extract the tarball
	I1031 00:13:07.956854  249055 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 00:13:08.017763  249055 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:13:08.072921  249055 crio.go:496] all images are preloaded for cri-o runtime.
	I1031 00:13:08.072982  249055 cache_images.go:84] Images are preloaded, skipping loading
	I1031 00:13:08.073063  249055 ssh_runner.go:195] Run: crio config
	I1031 00:13:08.131013  249055 cni.go:84] Creating CNI manager for ""
	I1031 00:13:08.131045  249055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:08.131070  249055 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:13:08.131099  249055 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-892233 NodeName:default-k8s-diff-port-892233 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 00:13:08.131362  249055 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-892233"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 00:13:08.131583  249055 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-892233 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-892233 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1031 00:13:08.131658  249055 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 00:13:08.140884  249055 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:13:08.140973  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:13:08.149405  249055 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (386 bytes)
	I1031 00:13:08.166006  249055 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:13:08.182874  249055 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1031 00:13:08.200304  249055 ssh_runner.go:195] Run: grep 192.168.39.2	control-plane.minikube.internal$ /etc/hosts
	I1031 00:13:08.203993  249055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
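
The bash one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the node IP. The same edit as a small Go sketch; the address comes from the log, and root privileges on the guest are assumed:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const hostsPath = "/etc/hosts"
		const entry = "192.168.39.2\tcontrol-plane.minikube.internal"

		data, err := os.ReadFile(hostsPath)
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// drop any previous control-plane.minikube.internal mapping
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
		fmt.Println("updated", hostsPath)
	}
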
	I1031 00:13:08.217645  249055 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233 for IP: 192.168.39.2
	I1031 00:13:08.217692  249055 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:13:08.217873  249055 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:13:08.217924  249055 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:13:08.218015  249055 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/client.key
	I1031 00:13:08.308243  249055 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/apiserver.key.dd3b77ed
	I1031 00:13:08.308354  249055 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/proxy-client.key
	I1031 00:13:08.308540  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:13:08.308606  249055 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:13:08.308626  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:13:08.308652  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:13:08.308678  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:13:08.308701  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:13:08.308743  249055 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:13:08.309489  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:13:08.339601  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:13:08.365873  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:13:08.393028  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1031 00:13:08.418983  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:13:08.445555  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:13:08.471234  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:13:08.496657  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:13:08.522698  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:13:08.546933  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:13:08.570645  249055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:13:08.596096  249055 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:13:08.615431  249055 ssh_runner.go:195] Run: openssl version
	I1031 00:13:08.621901  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:13:08.633316  249055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:13:08.638479  249055 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:13:08.638546  249055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:13:08.644750  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:13:08.656306  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:13:08.669978  249055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:08.675964  249055 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:08.676033  249055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:08.682433  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 00:13:08.694215  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:13:08.706255  249055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:13:08.713046  249055 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:13:08.713147  249055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:13:08.720902  249055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1031 00:13:08.732062  249055 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:13:08.737112  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:13:08.745040  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:13:08.753046  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:13:08.759410  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:13:08.765847  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:13:08.772651  249055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
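
The -checkend 86400 probes above ask whether each certificate expires within the next 24 hours. An equivalent check in Go; the file path is one of those logged, and the helper name is illustrative:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within duration d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}
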
	I1031 00:13:08.779086  249055 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-892233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-892233 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.2 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:13:08.779224  249055 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:13:08.779292  249055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:13:08.832024  249055 cri.go:89] found id: ""
	I1031 00:13:08.832096  249055 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 00:13:08.842618  249055 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 00:13:08.842641  249055 kubeadm.go:636] restartCluster start
	I1031 00:13:08.842716  249055 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 00:13:08.852209  249055 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:08.853480  249055 kubeconfig.go:92] found "default-k8s-diff-port-892233" server: "https://192.168.39.2:8444"
	I1031 00:13:08.855965  249055 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 00:13:08.865555  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:08.865617  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:08.877258  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:08.877285  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:08.877332  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:08.887847  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:05.643929  248387 pod_ready.go:92] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:05.643958  248387 pod_ready.go:81] duration metric: took 8.31111047s waiting for pod "kube-scheduler-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:05.643971  248387 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:07.946810  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:06.186224  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:06.186916  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:06.186948  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:06.186867  249826 retry.go:31] will retry after 964.405003ms: waiting for machine to come up
	I1031 00:13:07.153455  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:07.153973  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:07.154006  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:07.153917  249826 retry.go:31] will retry after 1.515980724s: waiting for machine to come up
	I1031 00:13:08.671700  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:08.672189  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:08.672219  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:08.672117  249826 retry.go:31] will retry after 2.254841495s: waiting for machine to come up
	I1031 00:13:09.658372  248718 node_ready.go:58] node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:11.938230  248718 node_ready.go:58] node "embed-certs-078843" has status "Ready":"False"
	I1031 00:13:12.439097  248718 node_ready.go:49] node "embed-certs-078843" has status "Ready":"True"
	I1031 00:13:12.439129  248718 node_ready.go:38] duration metric: took 7.101255254s waiting for node "embed-certs-078843" to be "Ready" ...
	I1031 00:13:12.439147  248718 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:13:12.447673  248718 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.469967  248718 pod_ready.go:92] pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.470002  248718 pod_ready.go:81] duration metric: took 22.292329ms waiting for pod "coredns-5dd5756b68-dqrs4" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.470017  248718 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.482061  248718 pod_ready.go:92] pod "etcd-embed-certs-078843" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.482092  248718 pod_ready.go:81] duration metric: took 12.066806ms waiting for pod "etcd-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.482106  248718 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.489019  248718 pod_ready.go:92] pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.489052  248718 pod_ready.go:81] duration metric: took 6.936171ms waiting for pod "kube-apiserver-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.489066  248718 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.500686  248718 pod_ready.go:92] pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.500712  248718 pod_ready.go:81] duration metric: took 11.637946ms waiting for pod "kube-controller-manager-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.500722  248718 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-287dq" in "kube-system" namespace to be "Ready" ...
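
The pod_ready.go waits above boil down to polling a pod until its PodReady condition is True. A sketch of that loop with client-go, assuming the default kubeconfig location and using one of the pod names from the log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady returns true when the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-embed-certs-078843", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pod to be Ready")
				return
			case <-time.After(2 * time.Second):
			}
		}
	}
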
	I1031 00:13:09.388669  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:09.388776  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:09.400708  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:09.888027  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:09.888146  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:09.900678  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:10.388004  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:10.388114  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:10.403685  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:10.888198  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:10.888314  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:10.900608  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:11.388239  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:11.388363  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:11.404992  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:11.888425  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:11.888541  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:11.900436  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:12.388293  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:12.388418  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:12.404621  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:12.888037  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:12.888156  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:12.900860  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:13.388276  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:13.388371  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:13.400841  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:13.888124  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:13.888238  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:13.903041  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:10.168791  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:12.169662  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:14.669047  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:10.928893  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:10.929414  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:10.929445  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:10.929369  249826 retry.go:31] will retry after 2.792980456s: waiting for machine to come up
	I1031 00:13:13.724006  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:13.724430  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:13.724469  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:13.724356  249826 retry.go:31] will retry after 2.555956413s: waiting for machine to come up
	I1031 00:13:12.838631  248718 pod_ready.go:92] pod "kube-proxy-287dq" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:12.838658  248718 pod_ready.go:81] duration metric: took 337.929955ms waiting for pod "kube-proxy-287dq" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:12.838668  248718 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:13.239513  248718 pod_ready.go:92] pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:13.239541  248718 pod_ready.go:81] duration metric: took 400.86714ms waiting for pod "kube-scheduler-embed-certs-078843" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:13.239552  248718 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:15.546507  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:14.388661  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:14.388736  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:14.402388  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:14.888855  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:14.888965  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:14.903137  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:15.388757  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:15.388868  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:15.404412  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:15.888848  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:15.888984  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:15.902181  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:16.388790  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:16.388913  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:16.402283  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:16.888892  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:16.889035  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:16.900677  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:17.388842  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:17.388983  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:17.401399  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:17.888981  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:17.889099  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:17.901474  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:18.387997  249055 api_server.go:166] Checking apiserver status ...
	I1031 00:13:18.388083  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:18.399745  249055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:18.866186  249055 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:13:18.866263  249055 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:13:18.866282  249055 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:13:18.866352  249055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:13:18.906125  249055 cri.go:89] found id: ""
	I1031 00:13:18.906214  249055 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:13:18.921555  249055 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:13:18.930111  249055 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:13:18.930193  249055 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:13:18.938516  249055 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:13:18.938545  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:19.070700  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:17.167517  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:19.170710  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:16.282473  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:16.282944  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:16.282975  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:16.282900  249826 retry.go:31] will retry after 2.811414756s: waiting for machine to come up
	I1031 00:13:19.096338  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:19.096738  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | unable to find current IP address of domain old-k8s-version-225140 in network mk-old-k8s-version-225140
	I1031 00:13:19.096760  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | I1031 00:13:19.096714  249826 retry.go:31] will retry after 3.844203493s: waiting for machine to come up
	I1031 00:13:17.548558  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:20.047074  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:22.047691  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:20.139806  249055 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069066882s)
	I1031 00:13:20.139847  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:20.337823  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:20.417915  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:20.499750  249055 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:13:20.499831  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:20.515735  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:21.029420  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:21.529636  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:22.029757  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:22.529034  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:23.029479  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:23.055542  249055 api_server.go:72] duration metric: took 2.555800185s to wait for apiserver process to appear ...
	I1031 00:13:23.055573  249055 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:13:23.055591  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:21.667545  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:24.167560  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:22.943000  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:22.943492  248084 main.go:141] libmachine: (old-k8s-version-225140) Found IP for machine: 192.168.72.65
	I1031 00:13:22.943521  248084 main.go:141] libmachine: (old-k8s-version-225140) Reserving static IP address...
	I1031 00:13:22.943540  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has current primary IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:22.944080  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "old-k8s-version-225140", mac: "52:54:00:9c:98:61", ip: "192.168.72.65"} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:22.944120  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | skip adding static IP to network mk-old-k8s-version-225140 - found existing host DHCP lease matching {name: "old-k8s-version-225140", mac: "52:54:00:9c:98:61", ip: "192.168.72.65"}
	I1031 00:13:22.944139  248084 main.go:141] libmachine: (old-k8s-version-225140) Reserved static IP address: 192.168.72.65
	I1031 00:13:22.944160  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Getting to WaitForSSH function...
	I1031 00:13:22.944168  248084 main.go:141] libmachine: (old-k8s-version-225140) Waiting for SSH to be available...
	I1031 00:13:22.946799  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:22.947189  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:22.947222  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:22.947416  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Using SSH client type: external
	I1031 00:13:22.947448  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa (-rw-------)
	I1031 00:13:22.947508  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:13:22.947534  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | About to run SSH command:
	I1031 00:13:22.947581  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | exit 0
	I1031 00:13:23.045850  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | SSH cmd err, output: <nil>: 
	I1031 00:13:23.046239  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetConfigRaw
	I1031 00:13:23.046996  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetIP
	I1031 00:13:23.050061  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.050464  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.050496  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.050789  248084 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/config.json ...
	I1031 00:13:23.051046  248084 machine.go:88] provisioning docker machine ...
	I1031 00:13:23.051070  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:23.051289  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetMachineName
	I1031 00:13:23.051484  248084 buildroot.go:166] provisioning hostname "old-k8s-version-225140"
	I1031 00:13:23.051511  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetMachineName
	I1031 00:13:23.051731  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.054157  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.054603  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.054636  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.054784  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:23.055085  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.055291  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.055503  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:23.055718  248084 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:23.056178  248084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1031 00:13:23.056203  248084 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-225140 && echo "old-k8s-version-225140" | sudo tee /etc/hostname
	I1031 00:13:23.184296  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-225140
	
	I1031 00:13:23.184356  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.187270  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.187720  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.187761  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.187895  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:23.188085  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.188228  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.188340  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:23.188565  248084 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:23.189104  248084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1031 00:13:23.189135  248084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-225140' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-225140/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-225140' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 00:13:23.315792  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 00:13:23.315829  248084 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17527-208817/.minikube CaCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17527-208817/.minikube}
	I1031 00:13:23.315893  248084 buildroot.go:174] setting up certificates
	I1031 00:13:23.315906  248084 provision.go:83] configureAuth start
	I1031 00:13:23.315921  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetMachineName
	I1031 00:13:23.316224  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetIP
	I1031 00:13:23.319690  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.320111  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.320143  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.320315  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.322897  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.323334  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.323362  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.323720  248084 provision.go:138] copyHostCerts
	I1031 00:13:23.323803  248084 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem, removing ...
	I1031 00:13:23.323820  248084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem
	I1031 00:13:23.323895  248084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/ca.pem (1078 bytes)
	I1031 00:13:23.324025  248084 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem, removing ...
	I1031 00:13:23.324043  248084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem
	I1031 00:13:23.324080  248084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/cert.pem (1123 bytes)
	I1031 00:13:23.324257  248084 exec_runner.go:144] found /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem, removing ...
	I1031 00:13:23.324272  248084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem
	I1031 00:13:23.324313  248084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17527-208817/.minikube/key.pem (1679 bytes)
	I1031 00:13:23.324415  248084 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-225140 san=[192.168.72.65 192.168.72.65 localhost 127.0.0.1 minikube old-k8s-version-225140]
	I1031 00:13:23.580836  248084 provision.go:172] copyRemoteCerts
	I1031 00:13:23.580905  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 00:13:23.580929  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.584088  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.584527  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.584576  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.584872  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:23.585115  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.585290  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:23.585440  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:13:23.680241  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1031 00:13:23.706003  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 00:13:23.730993  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 00:13:23.760873  248084 provision.go:86] duration metric: configureAuth took 444.934236ms
	I1031 00:13:23.760909  248084 buildroot.go:189] setting minikube options for container-runtime
	I1031 00:13:23.761208  248084 config.go:182] Loaded profile config "old-k8s-version-225140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1031 00:13:23.761370  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:23.764798  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.765219  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:23.765273  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:23.765411  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:23.765646  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.765868  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:23.766036  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:23.766256  248084 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:23.766762  248084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1031 00:13:23.766796  248084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1031 00:13:24.109914  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1031 00:13:24.109946  248084 machine.go:91] provisioned docker machine in 1.058882555s
	I1031 00:13:24.109958  248084 start.go:300] post-start starting for "old-k8s-version-225140" (driver="kvm2")
	I1031 00:13:24.109972  248084 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 00:13:24.109994  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.110392  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 00:13:24.110456  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:24.113825  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.114298  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.114335  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.114587  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:24.114814  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.114989  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:24.115148  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:13:24.206997  248084 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 00:13:24.211439  248084 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 00:13:24.211467  248084 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/addons for local assets ...
	I1031 00:13:24.211551  248084 filesync.go:126] Scanning /home/jenkins/minikube-integration/17527-208817/.minikube/files for local assets ...
	I1031 00:13:24.211635  248084 filesync.go:149] local asset: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem -> 2160052.pem in /etc/ssl/certs
	I1031 00:13:24.211722  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 00:13:24.219976  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:13:24.246337  248084 start.go:303] post-start completed in 136.360652ms
	I1031 00:13:24.246366  248084 fix.go:56] fixHost completed within 23.427336969s
	I1031 00:13:24.246389  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:24.249547  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.249876  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.249919  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.250099  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:24.250300  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.250603  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.250815  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:24.251022  248084 main.go:141] libmachine: Using SSH client type: native
	I1031 00:13:24.251387  248084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1031 00:13:24.251413  248084 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 00:13:24.366477  248084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698711204.302770779
	
	I1031 00:13:24.366499  248084 fix.go:206] guest clock: 1698711204.302770779
	I1031 00:13:24.366507  248084 fix.go:219] Guest: 2023-10-31 00:13:24.302770779 +0000 UTC Remote: 2023-10-31 00:13:24.246369619 +0000 UTC m=+368.452785688 (delta=56.40116ms)
	I1031 00:13:24.366558  248084 fix.go:190] guest clock delta is within tolerance: 56.40116ms
	I1031 00:13:24.366570  248084 start.go:83] releasing machines lock for "old-k8s-version-225140", held for 23.547580429s
	I1031 00:13:24.366599  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.366871  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetIP
	I1031 00:13:24.369640  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.369985  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.370032  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.370155  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.370695  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.370910  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:13:24.370996  248084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 00:13:24.371044  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:24.371205  248084 ssh_runner.go:195] Run: cat /version.json
	I1031 00:13:24.371233  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:13:24.373962  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.374315  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.374349  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.374379  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.374621  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:24.374759  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:24.374796  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.374822  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:24.374952  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:24.375018  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:13:24.375140  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:13:24.375139  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:13:24.375278  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:13:24.375383  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:13:24.490387  248084 ssh_runner.go:195] Run: systemctl --version
	I1031 00:13:24.497758  248084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1031 00:13:24.645967  248084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 00:13:24.652716  248084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 00:13:24.652795  248084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 00:13:24.668415  248084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 00:13:24.668446  248084 start.go:472] detecting cgroup driver to use...
	I1031 00:13:24.668513  248084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 00:13:24.683255  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 00:13:24.697242  248084 docker.go:198] disabling cri-docker service (if available) ...
	I1031 00:13:24.697295  248084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1031 00:13:24.710554  248084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1031 00:13:24.725562  248084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1031 00:13:24.847447  248084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1031 00:13:24.982382  248084 docker.go:214] disabling docker service ...
	I1031 00:13:24.982477  248084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 00:13:24.998270  248084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 00:13:25.011136  248084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 00:13:25.129421  248084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 00:13:25.258387  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 00:13:25.271528  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 00:13:25.291702  248084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1031 00:13:25.291788  248084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:25.301762  248084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1031 00:13:25.301826  248084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:25.311900  248084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:25.322111  248084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1031 00:13:25.331429  248084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 00:13:25.344907  248084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 00:13:25.354397  248084 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 00:13:25.354463  248084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 00:13:25.367335  248084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 00:13:25.376415  248084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 00:13:25.493551  248084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1031 00:13:25.677504  248084 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1031 00:13:25.677648  248084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1031 00:13:25.683882  248084 start.go:540] Will wait 60s for crictl version
	I1031 00:13:25.683952  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:25.687748  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 00:13:25.729230  248084 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1031 00:13:25.729316  248084 ssh_runner.go:195] Run: crio --version
	I1031 00:13:25.782619  248084 ssh_runner.go:195] Run: crio --version
	I1031 00:13:25.832400  248084 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1031 00:13:25.833898  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetIP
	I1031 00:13:25.836924  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:25.837347  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:13:25.837372  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:13:25.837666  248084 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1031 00:13:25.841940  248084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
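The 248084 segment above provisions the old-k8s-version-225140 node and reconfigures CRI-O: it writes /etc/crictl.yaml to point crictl at the crio socket, pins the pause image used for Kubernetes v1.16.0, switches the cgroup manager to cgroupfs, drops and re-adds conmon_cgroup, and restarts the service. A minimal, hypothetical Go sketch of that same command sequence follows; it assumes local execution with os/exec rather than minikube's ssh_runner, and all paths and values are taken from the log lines above.

// Sketch only: replays the CRI-O reconfiguration commands from the log
// on the local host. Not minikube source code.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(cmd string) {
	// Each step in the log is invoked through a shell, so do the same here.
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		log.Fatalf("%q failed: %v\n%s", cmd, err, out)
	}
}

func main() {
	pauseImage := "registry.k8s.io/pause:3.1" // pause image used for v1.16.0 in the log
	run(`sudo mkdir -p /etc && printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml`)
	run(fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage))
	run(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`)
	run(`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`)
	run(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`)
	run(`sudo systemctl daemon-reload && sudo systemctl restart crio`)
}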
	I1031 00:13:24.051460  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:26.554325  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:26.499116  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:26.499157  249055 api_server.go:103] status: https://192.168.39.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:26.499172  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:26.509898  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:26.509929  249055 api_server.go:103] status: https://192.168.39.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:27.010543  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:27.024054  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:13:27.024104  249055 api_server.go:103] status: https://192.168.39.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:13:27.510303  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:27.518621  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 00:13:27.518658  249055 api_server.go:103] status: https://192.168.39.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 00:13:28.010147  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:13:28.017834  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 200:
	ok
	I1031 00:13:28.027903  249055 api_server.go:141] control plane version: v1.28.3
	I1031 00:13:28.028005  249055 api_server.go:131] duration metric: took 4.972421145s to wait for apiserver health ...
	I1031 00:13:28.028033  249055 cni.go:84] Creating CNI manager for ""
	I1031 00:13:28.028070  249055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:28.030427  249055 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:13:28.032020  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:13:28.042889  249055 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:13:28.084357  249055 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:13:28.114368  249055 system_pods.go:59] 8 kube-system pods found
	I1031 00:13:28.114416  249055 system_pods.go:61] "coredns-5dd5756b68-6sbs7" [4cf52749-359c-42b7-a985-d2cdc3f20700] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1031 00:13:28.114430  249055 system_pods.go:61] "etcd-default-k8s-diff-port-892233" [75c06d7d-877d-4df8-9805-0ea50aec938f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1031 00:13:28.114440  249055 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-892233" [6eb1d4f8-0594-4992-962c-383062853ed0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1031 00:13:28.114460  249055 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-892233" [8b5e8ab9-34fe-4337-95d1-554adbd23505] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1031 00:13:28.114470  249055 system_pods.go:61] "kube-proxy-jn2j8" [23f4d9d7-61a0-43d9-a815-a4ce10a568e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1031 00:13:28.114479  249055 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-892233" [dcb7e68d-4e3d-4e46-935a-1372309ad89c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1031 00:13:28.114488  249055 system_pods.go:61] "metrics-server-57f55c9bc5-7klqw" [3f832e2c-81b4-431e-b1a2-987057fdae0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:13:28.114502  249055 system_pods.go:61] "storage-provisioner" [b912cf02-280b-47e0-8e72-fd22566a40f9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1031 00:13:28.114515  249055 system_pods.go:74] duration metric: took 30.127265ms to wait for pod list to return data ...
	I1031 00:13:28.114534  249055 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:13:28.126920  249055 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:13:28.126971  249055 node_conditions.go:123] node cpu capacity is 2
	I1031 00:13:28.127018  249055 node_conditions.go:105] duration metric: took 12.476154ms to run NodePressure ...
	I1031 00:13:28.127048  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:28.402286  249055 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1031 00:13:28.407352  249055 kubeadm.go:787] kubelet initialised
	I1031 00:13:28.407384  249055 kubeadm.go:788] duration metric: took 5.069821ms waiting for restarted kubelet to initialise ...
	I1031 00:13:28.407397  249055 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:13:28.413100  249055 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace to be "Ready" ...
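The 249055 segment above waits for the reconfigured apiserver on default-k8s-diff-port-892233 by polling https://192.168.39.2:8444/healthz, treating the 403 (anonymous user forbidden) and 500 (poststarthook checks still failing) responses as "not ready" until a plain 200 "ok" arrives. A hypothetical Go sketch of that polling loop is below; it is not the api_server.go shown in the log, and it skips TLS verification only because the sketch has no access to the cluster CA.

// Sketch only: poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// 403 and 500, as seen in the log, mean the control plane is
			// still coming up; keep retrying.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.2:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}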
	I1031 00:13:26.174532  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:28.667350  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:25.856078  248084 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1031 00:13:25.856136  248084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:13:25.913612  248084 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1031 00:13:25.913733  248084 ssh_runner.go:195] Run: which lz4
	I1031 00:13:25.918632  248084 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1031 00:13:25.923981  248084 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 00:13:25.924014  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1031 00:13:27.712494  248084 crio.go:444] Took 1.793896 seconds to copy over tarball
	I1031 00:13:27.712615  248084 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 00:13:29.050835  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:31.549536  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:30.457173  249055 pod_ready.go:102] pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:33.255838  249055 pod_ready.go:102] pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:30.667667  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:33.167250  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:31.207204  248084 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.494544747s)
	I1031 00:13:31.207238  248084 crio.go:451] Took 3.494710 seconds to extract the tarball
	I1031 00:13:31.207250  248084 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 00:13:31.253648  248084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 00:13:31.312599  248084 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1031 00:13:31.312624  248084 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1031 00:13:31.312719  248084 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.312753  248084 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.312763  248084 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.312776  248084 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1031 00:13:31.312705  248084 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:13:31.313005  248084 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.313122  248084 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.312926  248084 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.314301  248084 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.314408  248084 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:13:31.314826  248084 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.314863  248084 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.314835  248084 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.314877  248084 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.314888  248084 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.314904  248084 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1031 00:13:31.492117  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.493373  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.506179  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.506237  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1031 00:13:31.510547  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.515827  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.524137  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.614442  248084 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1031 00:13:31.614494  248084 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.614544  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.622661  248084 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1031 00:13:31.622718  248084 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.622770  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.630473  248084 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:13:31.674058  248084 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1031 00:13:31.674111  248084 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.674161  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.707251  248084 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1031 00:13:31.707293  248084 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1031 00:13:31.707337  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.718947  248084 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1031 00:13:31.719006  248084 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.719008  248084 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1031 00:13:31.718947  248084 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1031 00:13:31.719056  248084 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.719072  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.719084  248084 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.719111  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.719119  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1031 00:13:31.719139  248084 ssh_runner.go:195] Run: which crictl
	I1031 00:13:31.719176  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1031 00:13:31.866787  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1031 00:13:31.866815  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1031 00:13:31.866818  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1031 00:13:31.866883  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1031 00:13:31.866887  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1031 00:13:31.866936  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1031 00:13:31.867046  248084 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1031 00:13:31.993265  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1031 00:13:31.993505  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1031 00:13:31.993999  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1031 00:13:31.994045  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1031 00:13:31.994063  248084 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1031 00:13:31.994123  248084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1031 00:13:31.999020  248084 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1031 00:13:31.999034  248084 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1031 00:13:31.999068  248084 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1031 00:13:33.460498  248084 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.461402246s)
	I1031 00:13:33.460530  248084 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1031 00:13:33.460582  248084 cache_images.go:92] LoadImages completed in 2.147945804s
	W1031 00:13:33.460661  248084 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17527-208817/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I1031 00:13:33.460749  248084 ssh_runner.go:195] Run: crio config
	I1031 00:13:33.528812  248084 cni.go:84] Creating CNI manager for ""
	I1031 00:13:33.528838  248084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:33.528865  248084 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 00:13:33.528895  248084 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.65 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-225140 NodeName:old-k8s-version-225140 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1031 00:13:33.529103  248084 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-225140"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-225140
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.65:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 00:13:33.529205  248084 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-225140 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-225140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 00:13:33.529276  248084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1031 00:13:33.539328  248084 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 00:13:33.539424  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 00:13:33.551543  248084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1031 00:13:33.569095  248084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 00:13:33.586561  248084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1031 00:13:33.605084  248084 ssh_runner.go:195] Run: grep 192.168.72.65	control-plane.minikube.internal$ /etc/hosts
	I1031 00:13:33.609322  248084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 00:13:33.623527  248084 certs.go:56] Setting up /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140 for IP: 192.168.72.65
	I1031 00:13:33.623556  248084 certs.go:190] acquiring lock for shared ca certs: {Name:mk0af4cae440a8b63f5f4f696fa4a50605adb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:13:33.623768  248084 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key
	I1031 00:13:33.623817  248084 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key
	I1031 00:13:33.623919  248084 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.key
	I1031 00:13:33.624000  248084 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/apiserver.key.fa85241c
	I1031 00:13:33.624074  248084 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/proxy-client.key
	I1031 00:13:33.624223  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem (1338 bytes)
	W1031 00:13:33.624267  248084 certs.go:433] ignoring /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005_empty.pem, impossibly tiny 0 bytes
	I1031 00:13:33.624285  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 00:13:33.624333  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem (1078 bytes)
	I1031 00:13:33.624377  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem (1123 bytes)
	I1031 00:13:33.624409  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/certs/home/jenkins/minikube-integration/17527-208817/.minikube/certs/key.pem (1679 bytes)
	I1031 00:13:33.624480  248084 certs.go:437] found cert: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem (1708 bytes)
	I1031 00:13:33.625311  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 00:13:33.648457  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1031 00:13:33.673383  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 00:13:33.701679  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 00:13:33.725823  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 00:13:33.748912  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 00:13:33.777397  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 00:13:33.803003  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 00:13:33.827749  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/certs/216005.pem --> /usr/share/ca-certificates/216005.pem (1338 bytes)
	I1031 00:13:33.850011  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/ssl/certs/2160052.pem --> /usr/share/ca-certificates/2160052.pem (1708 bytes)
	I1031 00:13:33.871722  248084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 00:13:33.894663  248084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 00:13:33.912130  248084 ssh_runner.go:195] Run: openssl version
	I1031 00:13:33.918010  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/216005.pem && ln -fs /usr/share/ca-certificates/216005.pem /etc/ssl/certs/216005.pem"
	I1031 00:13:33.928381  248084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/216005.pem
	I1031 00:13:33.933548  248084 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 30 23:11 /usr/share/ca-certificates/216005.pem
	I1031 00:13:33.933605  248084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/216005.pem
	I1031 00:13:33.939344  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/216005.pem /etc/ssl/certs/51391683.0"
	I1031 00:13:33.950844  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2160052.pem && ln -fs /usr/share/ca-certificates/2160052.pem /etc/ssl/certs/2160052.pem"
	I1031 00:13:33.962585  248084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2160052.pem
	I1031 00:13:33.968178  248084 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 30 23:11 /usr/share/ca-certificates/2160052.pem
	I1031 00:13:33.968244  248084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2160052.pem
	I1031 00:13:33.975606  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2160052.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 00:13:33.986565  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 00:13:33.998188  248084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:34.003940  248084 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 30 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:34.004012  248084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 00:13:34.010088  248084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 00:13:34.022223  248084 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 00:13:34.028537  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 00:13:34.036319  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 00:13:34.043481  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 00:13:34.051269  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 00:13:34.058129  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 00:13:34.065473  248084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1031 00:13:34.072663  248084 kubeadm.go:404] StartCluster: {Name:old-k8s-version-225140 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-225140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.65 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:13:34.072781  248084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1031 00:13:34.072830  248084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:13:34.121758  248084 cri.go:89] found id: ""
	I1031 00:13:34.121848  248084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 00:13:34.135357  248084 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 00:13:34.135392  248084 kubeadm.go:636] restartCluster start
	I1031 00:13:34.135469  248084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 00:13:34.145173  248084 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:34.146905  248084 kubeconfig.go:92] found "old-k8s-version-225140" server: "https://192.168.72.65:8443"
	I1031 00:13:34.150660  248084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 00:13:34.163037  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:34.163119  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:34.184414  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:34.184441  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:34.184586  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:34.197787  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:34.698120  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:34.698246  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:34.710874  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:35.198312  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:35.198384  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:35.210933  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:35.698108  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:35.698210  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:35.710184  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:33.551354  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:36.048781  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:35.442171  249055 pod_ready.go:102] pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:36.941322  249055 pod_ready.go:92] pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:36.941344  249055 pod_ready.go:81] duration metric: took 8.528221711s waiting for pod "coredns-5dd5756b68-6sbs7" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:36.941353  249055 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:38.959679  249055 pod_ready.go:102] pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:35.168250  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:37.666699  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:36.198699  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:36.198787  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:36.211005  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:36.698612  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:36.698705  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:36.712106  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:37.198674  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:37.198779  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:37.211665  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:37.698160  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:37.698258  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:37.709798  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:38.198294  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:38.198410  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:38.210400  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:38.697965  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:38.698058  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:38.710188  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:39.198306  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:39.198435  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:39.210213  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:39.698867  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:39.698944  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:39.709958  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:40.198113  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:40.198217  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:40.209265  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:40.698424  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:40.698494  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:40.715194  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:38.548167  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:41.047378  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:39.959598  249055 pod_ready.go:92] pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:39.959625  249055 pod_ready.go:81] duration metric: took 3.018261782s waiting for pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.959638  249055 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.965182  249055 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:39.965204  249055 pod_ready.go:81] duration metric: took 5.558563ms waiting for pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.965218  249055 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.970258  249055 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:39.970283  249055 pod_ready.go:81] duration metric: took 5.058027ms waiting for pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.970293  249055 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jn2j8" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.975183  249055 pod_ready.go:92] pod "kube-proxy-jn2j8" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:39.975202  249055 pod_ready.go:81] duration metric: took 4.903272ms waiting for pod "kube-proxy-jn2j8" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:39.975209  249055 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:40.137875  249055 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:13:40.137907  249055 pod_ready.go:81] duration metric: took 162.69035ms waiting for pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:40.137921  249055 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace to be "Ready" ...
	I1031 00:13:42.452793  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:40.167385  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:42.666396  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:41.198534  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:41.198640  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:41.210412  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:41.698420  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:41.698526  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:41.710324  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:42.198572  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:42.198649  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:42.210399  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:42.697932  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:42.698010  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:42.711010  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:43.198096  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:43.198182  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:43.209468  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:43.698864  248084 api_server.go:166] Checking apiserver status ...
	I1031 00:13:43.698998  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 00:13:43.710735  248084 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 00:13:44.163493  248084 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 00:13:44.163545  248084 kubeadm.go:1128] stopping kube-system containers ...
	I1031 00:13:44.163560  248084 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1031 00:13:44.163621  248084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 00:13:44.204352  248084 cri.go:89] found id: ""
	I1031 00:13:44.204444  248084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 00:13:44.219641  248084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:13:44.228342  248084 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:13:44.228420  248084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:13:44.237058  248084 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 00:13:44.237081  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:44.369926  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:45.077715  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:45.306025  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:45.399572  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:45.537955  248084 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:13:45.538046  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:45.554284  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:43.549424  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:46.052253  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:44.947118  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:46.954020  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:45.167622  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:47.669895  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:46.073056  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:46.572408  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:47.072392  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:13:47.098617  248084 api_server.go:72] duration metric: took 1.560662194s to wait for apiserver process to appear ...
	I1031 00:13:47.098650  248084 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:13:47.098673  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:48.547476  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:50.547537  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:49.446620  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:51.946346  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:53.949089  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:52.098997  248084 api_server.go:269] stopped: https://192.168.72.65:8443/healthz: Get "https://192.168.72.65:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1031 00:13:52.099073  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:52.709441  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 00:13:52.709490  248084 api_server.go:103] status: https://192.168.72.65:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 00:13:53.210178  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:53.216374  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1031 00:13:53.216403  248084 api_server.go:103] status: https://192.168.72.65:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1031 00:13:53.709935  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:53.717326  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1031 00:13:53.717361  248084 api_server.go:103] status: https://192.168.72.65:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1031 00:13:54.209883  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:13:54.215985  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 200:
	ok
	I1031 00:13:54.224088  248084 api_server.go:141] control plane version: v1.16.0
	I1031 00:13:54.224115  248084 api_server.go:131] duration metric: took 7.125456227s to wait for apiserver health ...
	I1031 00:13:54.224127  248084 cni.go:84] Creating CNI manager for ""
	I1031 00:13:54.224135  248084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:13:54.226152  248084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
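
For readers tracing the apiserver bring-up above: the repeated "Checking apiserver healthz at https://192.168.72.65:8443/healthz ..." lines are a poll-until-200 loop, where 403 ("system:anonymous") and 500 ("healthz check failed") responses mean the control plane is still starting. A minimal standalone Go sketch of that pattern follows; it is illustrative only, not minikube's actual api_server.go, and the URL and timeouts are simply taken from the log.

// healthz_probe.go - minimal standalone sketch (not minikube's code):
// poll an apiserver /healthz endpoint until it returns HTTP 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The apiserver serves a self-signed certificate during bootstrap, so
	// verification is skipped here, mirroring an anonymous health probe.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403/500 while post-start hooks finish; 200 "ok" means healthy.
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.65:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
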
	I1031 00:13:50.168563  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:52.669900  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:54.227723  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:13:54.239709  248084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:13:54.261391  248084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:13:54.273728  248084 system_pods.go:59] 7 kube-system pods found
	I1031 00:13:54.273761  248084 system_pods.go:61] "coredns-5644d7b6d9-2s6pc" [c77d23a4-28d0-4bbf-bb28-baff23fc4987] Running
	I1031 00:13:54.273775  248084 system_pods.go:61] "etcd-old-k8s-version-225140" [dcc629ce-f107-4d14-b69b-20228b00b7c5] Running
	I1031 00:13:54.273783  248084 system_pods.go:61] "kube-apiserver-old-k8s-version-225140" [38fd683e-51fa-40f0-a3c6-afdf57e14132] Running
	I1031 00:13:54.273791  248084 system_pods.go:61] "kube-controller-manager-old-k8s-version-225140" [29b1b9cb-1819-497e-b0f9-c008b0ac6e26] Running
	I1031 00:13:54.273803  248084 system_pods.go:61] "kube-proxy-fxz8t" [57ccd26e-cbcf-4ed3-adbe-778fd8bcf27c] Running
	I1031 00:13:54.273811  248084 system_pods.go:61] "kube-scheduler-old-k8s-version-225140" [d8d4d75c-25f8-4485-853c-8fa75105c6e2] Running
	I1031 00:13:54.273818  248084 system_pods.go:61] "storage-provisioner" [8fc76055-6a96-4884-8f91-b2d3f598bc88] Running
	I1031 00:13:54.273826  248084 system_pods.go:74] duration metric: took 12.417629ms to wait for pod list to return data ...
	I1031 00:13:54.273840  248084 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:13:54.279056  248084 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:13:54.279082  248084 node_conditions.go:123] node cpu capacity is 2
	I1031 00:13:54.279094  248084 node_conditions.go:105] duration metric: took 5.248504ms to run NodePressure ...
	I1031 00:13:54.279111  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 00:13:54.594257  248084 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1031 00:13:54.600279  248084 retry.go:31] will retry after 287.663167ms: kubelet not initialised
	I1031 00:13:54.899142  248084 retry.go:31] will retry after 297.826066ms: kubelet not initialised
	I1031 00:13:55.205347  248084 retry.go:31] will retry after 797.709551ms: kubelet not initialised
	I1031 00:13:52.548142  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:54.548667  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:57.047942  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:56.446395  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:58.946167  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:55.167909  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:57.668179  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:59.668339  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:13:56.009099  248084 retry.go:31] will retry after 571.448668ms: kubelet not initialised
	I1031 00:13:56.593388  248084 retry.go:31] will retry after 1.82270665s: kubelet not initialised
	I1031 00:13:58.421789  248084 retry.go:31] will retry after 1.094040234s: kubelet not initialised
	I1031 00:13:59.522021  248084 retry.go:31] will retry after 3.716569913s: kubelet not initialised
	I1031 00:13:59.549278  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:01.551103  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:01.446913  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:03.947203  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:01.668422  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:03.668478  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:03.244381  248084 retry.go:31] will retry after 4.104024564s: kubelet not initialised
	I1031 00:14:04.048498  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:06.548070  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:06.447864  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:08.945886  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:06.166653  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:08.167008  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:07.354371  248084 retry.go:31] will retry after 9.18347873s: kubelet not initialised
	I1031 00:14:09.047421  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:11.048479  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:11.448689  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:13.948268  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:10.667348  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:12.667812  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:13.052934  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:15.547846  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:16.446625  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:18.447872  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:15.167259  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:17.666670  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:19.667251  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:16.544997  248084 retry.go:31] will retry after 8.29261189s: kubelet not initialised
	I1031 00:14:17.550692  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:20.045758  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:22.047516  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:20.946805  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:23.446875  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:21.667436  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:24.167210  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:24.843011  248084 retry.go:31] will retry after 15.309414425s: kubelet not initialised
	I1031 00:14:24.048197  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:26.546847  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:25.946796  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:27.950212  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:26.167443  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:28.168482  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:28.548116  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:31.047187  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:30.446164  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:32.451487  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:30.666762  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:32.667234  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:33.049216  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:35.545964  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:34.946961  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:36.947212  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:38.949437  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:35.167751  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:37.668981  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:39.669233  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:40.157618  248084 kubeadm.go:787] kubelet initialised
	I1031 00:14:40.157647  248084 kubeadm.go:788] duration metric: took 45.563360213s waiting for restarted kubelet to initialise ...
	I1031 00:14:40.157660  248084 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:14:40.163372  248084 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-2s6pc" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.169776  248084 pod_ready.go:92] pod "coredns-5644d7b6d9-2s6pc" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.169798  248084 pod_ready.go:81] duration metric: took 6.398827ms waiting for pod "coredns-5644d7b6d9-2s6pc" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.169806  248084 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-b6lnc" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.175023  248084 pod_ready.go:92] pod "coredns-5644d7b6d9-b6lnc" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.175047  248084 pod_ready.go:81] duration metric: took 5.233827ms waiting for pod "coredns-5644d7b6d9-b6lnc" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.175058  248084 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.179248  248084 pod_ready.go:92] pod "etcd-old-k8s-version-225140" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.179269  248084 pod_ready.go:81] duration metric: took 4.202967ms waiting for pod "etcd-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.179279  248084 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.183579  248084 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-225140" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.183593  248084 pod_ready.go:81] duration metric: took 4.308627ms waiting for pod "kube-apiserver-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.183604  248084 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.558275  248084 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-225140" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.558308  248084 pod_ready.go:81] duration metric: took 374.694908ms waiting for pod "kube-controller-manager-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.558321  248084 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fxz8t" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:37.547289  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:40.047586  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:41.446752  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:43.447874  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:42.166207  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:44.167277  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:40.958069  248084 pod_ready.go:92] pod "kube-proxy-fxz8t" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:40.958099  248084 pod_ready.go:81] duration metric: took 399.768399ms waiting for pod "kube-proxy-fxz8t" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:40.958112  248084 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:41.358244  248084 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-225140" in "kube-system" namespace has status "Ready":"True"
	I1031 00:14:41.358274  248084 pod_ready.go:81] duration metric: took 400.15381ms waiting for pod "kube-scheduler-old-k8s-version-225140" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:41.358284  248084 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace to be "Ready" ...
	I1031 00:14:43.666594  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:45.666948  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:42.547950  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:45.047306  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:45.946510  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:47.946663  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:46.167952  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:48.667854  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:48.166448  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:50.167022  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:47.547211  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:49.548100  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:51.548509  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:50.446801  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:52.447233  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:51.168676  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:53.667170  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:52.666608  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:54.667583  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:53.550528  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:56.050177  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:54.947677  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:57.447082  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:55.669616  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:58.170640  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:57.165612  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:59.168165  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:58.548441  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:01.047296  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:14:59.447626  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:01.947292  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:00.669772  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:03.167493  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:01.665706  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:04.166609  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:03.546708  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:05.547092  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:04.447672  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:06.449541  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:08.948333  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:05.667422  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:07.669173  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:06.666325  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:09.165998  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:07.547133  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:09.547568  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:11.551676  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:11.446875  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:13.946673  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:10.168209  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:12.666973  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:14.668147  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:11.166824  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:13.665410  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:14.046068  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:16.047803  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:15.946975  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:18.445704  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:17.167480  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:19.668157  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:16.165876  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:18.166620  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:20.666455  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:18.549666  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:21.046823  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:20.447212  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:22.947109  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:22.167144  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:24.168041  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:22.667076  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:25.167164  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:23.047419  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:25.049728  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:24.947312  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:27.449246  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:26.669861  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:29.168519  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:27.666465  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:30.166123  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:27.547889  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:30.046604  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:32.048045  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:29.948497  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:32.446948  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:31.670479  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:34.167604  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:32.668009  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:35.165749  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:34.547533  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:37.048031  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:34.945337  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:36.947811  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:36.168180  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:38.170343  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:37.168053  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:39.665709  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:39.552108  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:42.047262  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:39.451699  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:41.946296  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:40.667428  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:42.668235  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:41.666624  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:44.166672  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:44.047729  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:46.549442  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:44.447109  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:46.448250  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:48.947017  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:45.167138  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:47.666886  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:49.667907  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:46.669428  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:49.166194  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:49.047526  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:51.049047  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:50.947410  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:53.446734  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:52.167771  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:54.167875  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:51.666228  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:53.667295  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:53.052036  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:55.547767  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:55.946776  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:58.446825  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:56.668562  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:59.168110  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:56.167716  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:58.665487  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:00.668666  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:15:58.047770  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:00.047908  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:02.048356  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:00.946590  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:02.947001  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:01.667160  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:04.167375  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:03.165171  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:05.166289  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:04.049788  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:06.547020  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:05.446511  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:07.449772  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:06.667622  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:08.667665  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:07.166410  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:09.166536  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:09.049966  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:11.547967  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:09.947975  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:12.447789  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:11.168645  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:13.667838  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:11.665962  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:13.667117  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:15.667752  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:14.047716  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:16.048052  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:14.947264  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:16.947386  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:16.167045  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:18.668483  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:17.669275  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:20.167079  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:18.548369  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:20.548635  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:19.448662  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:21.947615  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:21.167164  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:23.167506  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:22.666820  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:25.166614  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:23.046392  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:25.548954  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:24.446814  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:26.945792  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:28.947133  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:25.167732  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:27.168662  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:29.171362  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:27.169221  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:29.667206  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:27.550807  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:30.048391  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:31.448249  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:33.946336  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:31.667185  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:33.667628  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:32.165207  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:34.166237  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:32.546558  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:35.046558  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:37.047654  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:35.946896  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:38.449959  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:35.668366  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:38.168509  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:36.166529  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:38.666448  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:39.552154  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:42.046335  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:40.946962  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:43.446383  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:40.666758  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:42.668031  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:41.168643  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:43.170216  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:45.666959  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:44.046908  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:46.548312  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:45.947573  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:47.947914  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:45.166562  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:47.667578  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:47.667903  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:50.166574  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:49.046763  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:51.047566  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:49.948510  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:52.446760  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:50.168646  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:52.667122  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:54.668132  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:52.168815  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:54.667713  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:53.546751  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:56.048217  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:54.947315  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:57.447727  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:57.169330  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:59.666819  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:57.166002  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:59.168109  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:58.548212  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:01.047033  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:16:59.448330  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:01.946970  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:01.667755  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:04.167493  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:01.666457  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:04.167186  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:03.546842  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:05.547488  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:04.445743  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:06.446624  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:08.451015  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:05.644115  248387 pod_ready.go:81] duration metric: took 4m0.000125657s waiting for pod "metrics-server-57f55c9bc5-nm8dj" in "kube-system" namespace to be "Ready" ...
	E1031 00:17:05.644148  248387 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:17:05.644168  248387 pod_ready.go:38] duration metric: took 4m9.241022532s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:17:05.644198  248387 kubeadm.go:640] restartCluster took 4m28.058055798s
	W1031 00:17:05.644570  248387 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1031 00:17:05.644685  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1031 00:17:06.168910  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:08.666612  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:08.047998  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:10.547186  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:10.946940  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:13.455539  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:11.168678  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:13.667122  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:13.046682  248718 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:13.240656  248718 pod_ready.go:81] duration metric: took 4m0.001083426s waiting for pod "metrics-server-57f55c9bc5-pm6qx" in "kube-system" namespace to be "Ready" ...
	E1031 00:17:13.240702  248718 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:17:13.240712  248718 pod_ready.go:38] duration metric: took 4m0.801552437s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:17:13.240732  248718 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:17:13.240766  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:17:13.240930  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:17:13.307072  248718 cri.go:89] found id: "bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:13.307099  248718 cri.go:89] found id: ""
	I1031 00:17:13.307108  248718 logs.go:284] 1 containers: [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033]
	I1031 00:17:13.307180  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.312997  248718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:17:13.313067  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:17:13.364439  248718 cri.go:89] found id: "35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:13.364474  248718 cri.go:89] found id: ""
	I1031 00:17:13.364485  248718 logs.go:284] 1 containers: [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6]
	I1031 00:17:13.364561  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.370120  248718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:17:13.370186  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:17:13.413937  248718 cri.go:89] found id: "8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:13.413972  248718 cri.go:89] found id: ""
	I1031 00:17:13.413983  248718 logs.go:284] 1 containers: [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26]
	I1031 00:17:13.414051  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.420586  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:17:13.420669  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:17:13.476980  248718 cri.go:89] found id: "ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:13.477008  248718 cri.go:89] found id: ""
	I1031 00:17:13.477028  248718 logs.go:284] 1 containers: [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80]
	I1031 00:17:13.477100  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.482874  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:17:13.482957  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:17:13.532196  248718 cri.go:89] found id: "f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:13.532232  248718 cri.go:89] found id: ""
	I1031 00:17:13.532244  248718 logs.go:284] 1 containers: [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3]
	I1031 00:17:13.532314  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.539868  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:17:13.540017  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:17:13.595189  248718 cri.go:89] found id: "4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:13.595218  248718 cri.go:89] found id: ""
	I1031 00:17:13.595231  248718 logs.go:284] 1 containers: [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70]
	I1031 00:17:13.595305  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.601429  248718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:17:13.601496  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:17:13.641957  248718 cri.go:89] found id: ""
	I1031 00:17:13.641984  248718 logs.go:284] 0 containers: []
	W1031 00:17:13.641992  248718 logs.go:286] No container was found matching "kindnet"
	I1031 00:17:13.641998  248718 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:17:13.642053  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:17:13.683163  248718 cri.go:89] found id: "86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:13.683193  248718 cri.go:89] found id: "622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:13.683200  248718 cri.go:89] found id: ""
	I1031 00:17:13.683209  248718 logs.go:284] 2 containers: [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c]
	I1031 00:17:13.683266  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.689222  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:13.693814  248718 logs.go:123] Gathering logs for dmesg ...
	I1031 00:17:13.693839  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:17:13.710167  248718 logs.go:123] Gathering logs for kube-proxy [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3] ...
	I1031 00:17:13.710188  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:13.754241  248718 logs.go:123] Gathering logs for storage-provisioner [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3] ...
	I1031 00:17:13.754273  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:13.800473  248718 logs.go:123] Gathering logs for kube-apiserver [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033] ...
	I1031 00:17:13.800508  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:13.857072  248718 logs.go:123] Gathering logs for coredns [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26] ...
	I1031 00:17:13.857101  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:13.901072  248718 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:17:13.901102  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:17:14.390850  248718 logs.go:123] Gathering logs for container status ...
	I1031 00:17:14.390894  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:17:14.446107  248718 logs.go:123] Gathering logs for etcd [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6] ...
	I1031 00:17:14.446141  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:14.495337  248718 logs.go:123] Gathering logs for kube-scheduler [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80] ...
	I1031 00:17:14.495368  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:14.535558  248718 logs.go:123] Gathering logs for kube-controller-manager [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70] ...
	I1031 00:17:14.535591  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:14.589637  248718 logs.go:123] Gathering logs for kubelet ...
	I1031 00:17:14.589676  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1031 00:17:14.650509  248718 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:17:14.650559  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:17:14.816331  248718 logs.go:123] Gathering logs for storage-provisioner [622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c] ...
	I1031 00:17:14.816362  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:17.363336  248718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:17:17.378105  248718 api_server.go:72] duration metric: took 4m12.292425365s to wait for apiserver process to appear ...
	I1031 00:17:17.378131  248718 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:17:17.378171  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:17:17.378234  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:17:17.424054  248718 cri.go:89] found id: "bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:17.424082  248718 cri.go:89] found id: ""
	I1031 00:17:17.424091  248718 logs.go:284] 1 containers: [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033]
	I1031 00:17:17.424152  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.428185  248718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:17:17.428246  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:17:17.465132  248718 cri.go:89] found id: "35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:17.465157  248718 cri.go:89] found id: ""
	I1031 00:17:17.465167  248718 logs.go:284] 1 containers: [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6]
	I1031 00:17:17.465219  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.469315  248718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:17:17.469392  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:17:17.504119  248718 cri.go:89] found id: "8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:17.504140  248718 cri.go:89] found id: ""
	I1031 00:17:17.504151  248718 logs.go:284] 1 containers: [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26]
	I1031 00:17:17.504199  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:15.946464  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:17.949398  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:19.822838  248387 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.178119551s)
	I1031 00:17:19.822927  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:17:19.838182  248387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:17:19.847738  248387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:17:19.857883  248387 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:17:19.857939  248387 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
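Because the stale-config check above exited with status 2 (none of the kubeconfig files exist yet), minikube skips cleanup and goes straight into a fresh kubeadm init. For readability, the same invocation from the log line above, reflowed onto multiple lines; every flag and path is copied verbatim from the log, nothing added:

    sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem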
	I1031 00:17:19.911372  248387 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1031 00:17:19.911432  248387 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 00:17:20.091412  248387 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 00:17:20.091582  248387 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 00:17:20.091703  248387 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 00:17:20.351519  248387 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 00:17:16.166533  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:18.668258  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:20.353310  248387 out.go:204]   - Generating certificates and keys ...
	I1031 00:17:20.353500  248387 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 00:17:20.353598  248387 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 00:17:20.353712  248387 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1031 00:17:20.353809  248387 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1031 00:17:20.353933  248387 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1031 00:17:20.354050  248387 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1031 00:17:20.354132  248387 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1031 00:17:20.354241  248387 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1031 00:17:20.354353  248387 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1031 00:17:20.354596  248387 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1031 00:17:20.355193  248387 kubeadm.go:322] [certs] Using the existing "sa" key
	I1031 00:17:20.355332  248387 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 00:17:21.009329  248387 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 00:17:21.145431  248387 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 00:17:21.231013  248387 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 00:17:21.384423  248387 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 00:17:21.385066  248387 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 00:17:21.387895  248387 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 00:17:17.508240  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:17:17.510213  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:17:17.548666  248718 cri.go:89] found id: "ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:17.548692  248718 cri.go:89] found id: ""
	I1031 00:17:17.548702  248718 logs.go:284] 1 containers: [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80]
	I1031 00:17:17.548768  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.552963  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:17:17.553029  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:17:17.593690  248718 cri.go:89] found id: "f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:17.593728  248718 cri.go:89] found id: ""
	I1031 00:17:17.593739  248718 logs.go:284] 1 containers: [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3]
	I1031 00:17:17.593808  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.598269  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:17:17.598325  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:17:17.637723  248718 cri.go:89] found id: "4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:17.637750  248718 cri.go:89] found id: ""
	I1031 00:17:17.637761  248718 logs.go:284] 1 containers: [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70]
	I1031 00:17:17.637826  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.642006  248718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:17:17.642055  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:17:17.686659  248718 cri.go:89] found id: ""
	I1031 00:17:17.686687  248718 logs.go:284] 0 containers: []
	W1031 00:17:17.686695  248718 logs.go:286] No container was found matching "kindnet"
	I1031 00:17:17.686701  248718 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:17:17.686766  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:17:17.732114  248718 cri.go:89] found id: "86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:17.732147  248718 cri.go:89] found id: "622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:17.732154  248718 cri.go:89] found id: ""
	I1031 00:17:17.732163  248718 logs.go:284] 2 containers: [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c]
	I1031 00:17:17.732232  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.737308  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:17.741981  248718 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:17:17.742013  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:17:18.181024  248718 logs.go:123] Gathering logs for dmesg ...
	I1031 00:17:18.181062  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:17:18.196483  248718 logs.go:123] Gathering logs for coredns [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26] ...
	I1031 00:17:18.196519  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:18.235422  248718 logs.go:123] Gathering logs for kube-controller-manager [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70] ...
	I1031 00:17:18.235458  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:18.291366  248718 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:17:18.291402  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:17:18.412906  248718 logs.go:123] Gathering logs for etcd [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6] ...
	I1031 00:17:18.412960  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:18.469631  248718 logs.go:123] Gathering logs for kubelet ...
	I1031 00:17:18.469669  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1031 00:17:18.523997  248718 logs.go:123] Gathering logs for kube-scheduler [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80] ...
	I1031 00:17:18.524034  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:18.566490  248718 logs.go:123] Gathering logs for container status ...
	I1031 00:17:18.566520  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:17:18.626106  248718 logs.go:123] Gathering logs for storage-provisioner [622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c] ...
	I1031 00:17:18.626138  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:18.666341  248718 logs.go:123] Gathering logs for kube-apiserver [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033] ...
	I1031 00:17:18.666382  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:18.729380  248718 logs.go:123] Gathering logs for kube-proxy [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3] ...
	I1031 00:17:18.729430  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:18.788148  248718 logs.go:123] Gathering logs for storage-provisioner [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3] ...
	I1031 00:17:18.788182  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
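The "Gathering logs" passes above all reduce to a handful of node-side commands. A minimal sketch of reproducing the same collection by hand on the node (every command below is one the log already ran; the container ID is the kube-apiserver ID reported above):

    # service logs for the kubelet and CRI-O
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    # kernel warnings and errors
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # per-container logs, e.g. the kube-apiserver container found earlier
    sudo crictl ps -a
    sudo crictl logs --tail 400 bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033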
	I1031 00:17:21.330782  248718 api_server.go:253] Checking apiserver healthz at https://192.168.50.2:8443/healthz ...
	I1031 00:17:21.338085  248718 api_server.go:279] https://192.168.50.2:8443/healthz returned 200:
	ok
	I1031 00:17:21.339623  248718 api_server.go:141] control plane version: v1.28.3
	I1031 00:17:21.339671  248718 api_server.go:131] duration metric: took 3.961531332s to wait for apiserver health ...
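The healthz probe above is an HTTPS GET against the apiserver's /healthz endpoint. A hypothetical manual equivalent from a host that can reach the node (IP and port taken from the log; -k skips certificate verification, whereas minikube's own probe authenticates with the cluster's certificates):

    curl -k https://192.168.50.2:8443/healthz
    # expected response body: ok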
	I1031 00:17:21.339684  248718 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:17:21.339718  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:17:21.339786  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:17:21.380659  248718 cri.go:89] found id: "bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:21.380687  248718 cri.go:89] found id: ""
	I1031 00:17:21.380696  248718 logs.go:284] 1 containers: [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033]
	I1031 00:17:21.380760  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.385559  248718 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:17:21.385626  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:17:21.431810  248718 cri.go:89] found id: "35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:21.431841  248718 cri.go:89] found id: ""
	I1031 00:17:21.431851  248718 logs.go:284] 1 containers: [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6]
	I1031 00:17:21.431914  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.436489  248718 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:17:21.436562  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:17:21.489003  248718 cri.go:89] found id: "8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:21.489036  248718 cri.go:89] found id: ""
	I1031 00:17:21.489047  248718 logs.go:284] 1 containers: [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26]
	I1031 00:17:21.489109  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.493691  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:17:21.493765  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:17:21.533480  248718 cri.go:89] found id: "ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:21.533507  248718 cri.go:89] found id: ""
	I1031 00:17:21.533518  248718 logs.go:284] 1 containers: [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80]
	I1031 00:17:21.533584  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.538269  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:17:21.538358  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:17:21.589588  248718 cri.go:89] found id: "f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:21.589621  248718 cri.go:89] found id: ""
	I1031 00:17:21.589632  248718 logs.go:284] 1 containers: [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3]
	I1031 00:17:21.589705  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.595927  248718 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:17:21.596020  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:17:21.644705  248718 cri.go:89] found id: "4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:21.644730  248718 cri.go:89] found id: ""
	I1031 00:17:21.644738  248718 logs.go:284] 1 containers: [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70]
	I1031 00:17:21.644797  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.649696  248718 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:17:21.649762  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:17:21.696655  248718 cri.go:89] found id: ""
	I1031 00:17:21.696692  248718 logs.go:284] 0 containers: []
	W1031 00:17:21.696703  248718 logs.go:286] No container was found matching "kindnet"
	I1031 00:17:21.696711  248718 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:17:21.696788  248718 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:17:21.743499  248718 cri.go:89] found id: "86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:21.743523  248718 cri.go:89] found id: "622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:21.743528  248718 cri.go:89] found id: ""
	I1031 00:17:21.743535  248718 logs.go:284] 2 containers: [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c]
	I1031 00:17:21.743586  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.748625  248718 ssh_runner.go:195] Run: which crictl
	I1031 00:17:21.753187  248718 logs.go:123] Gathering logs for dmesg ...
	I1031 00:17:21.753223  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:17:21.768074  248718 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:17:21.768115  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:17:21.913742  248718 logs.go:123] Gathering logs for coredns [8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26] ...
	I1031 00:17:21.913782  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e049ebc03e124889f3303f206128b548a992cc64990d83144b7fd9d8c3a2a26"
	I1031 00:17:21.966345  248718 logs.go:123] Gathering logs for storage-provisioner [622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c] ...
	I1031 00:17:21.966394  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 622298cd36157636d591a31a79e6620f1b72fc95e35f2336bacf84f5bbe8812c"
	I1031 00:17:22.004823  248718 logs.go:123] Gathering logs for container status ...
	I1031 00:17:22.004857  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:17:22.059117  248718 logs.go:123] Gathering logs for etcd [35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6] ...
	I1031 00:17:22.059147  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35bf5adca8564a77985f4b81710754cfe9a5643d693b75c683e72cfba29a6cb6"
	I1031 00:17:22.117615  248718 logs.go:123] Gathering logs for kube-scheduler [ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80] ...
	I1031 00:17:22.117655  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee4cc3844ed36aeb4168bde6cba3455fd832e0204cb6e4b2000f53fc8f81ac80"
	I1031 00:17:22.160231  248718 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:17:22.160275  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:17:20.445730  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:22.447412  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:21.390006  248387 out.go:204]   - Booting up control plane ...
	I1031 00:17:21.390170  248387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 00:17:21.390275  248387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 00:17:21.391130  248387 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 00:17:21.408062  248387 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 00:17:21.409190  248387 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 00:17:21.409256  248387 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1031 00:17:21.565150  248387 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
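At this point kubeadm has written the static Pod manifests and is waiting for the kubelet to start them. A short sketch of watching that from inside the node (the manifest directory is the one named in the message above; the crictl calls mirror ones used elsewhere in this log):

    # the manifests kubeadm just wrote
    ls /etc/kubernetes/manifests
    # containers the kubelet has started from them
    sudo crictl ps -a --name=kube-apiserver
    sudo crictl ps -a --name=etcd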
	I1031 00:17:22.536881  248718 logs.go:123] Gathering logs for kube-apiserver [bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033] ...
	I1031 00:17:22.536920  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb31ab0db497fceec8e51b1fd4c7996dcf5720451b2a0ec239857d1997c5c033"
	I1031 00:17:22.591993  248718 logs.go:123] Gathering logs for kube-proxy [f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3] ...
	I1031 00:17:22.592030  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f52fe11ae842299694fb7907bc48d97bb16ceb24b1c534ff56e53aa823afe2c3"
	I1031 00:17:22.644262  248718 logs.go:123] Gathering logs for storage-provisioner [86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3] ...
	I1031 00:17:22.644302  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86e0b59eda801ddd9694bb90f809c32b9098f19f023018e386bfad074a86b2c3"
	I1031 00:17:22.688848  248718 logs.go:123] Gathering logs for kubelet ...
	I1031 00:17:22.688880  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1031 00:17:22.740390  248718 logs.go:123] Gathering logs for kube-controller-manager [4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70] ...
	I1031 00:17:22.740440  248718 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4622dc85f388295698cbe1b91ca8889d7ed0cb7a4d96aabe766fa32e64880c70"
	I1031 00:17:25.317640  248718 system_pods.go:59] 8 kube-system pods found
	I1031 00:17:25.317675  248718 system_pods.go:61] "coredns-5dd5756b68-dqrs4" [f6d80a09-c397-4c78-a038-f07cad11de9c] Running
	I1031 00:17:25.317682  248718 system_pods.go:61] "etcd-embed-certs-078843" [2dd3d20f-1309-4ec9-ab75-6b00cadc5827] Running
	I1031 00:17:25.317690  248718 system_pods.go:61] "kube-apiserver-embed-certs-078843" [6a41123e-11a9-4aff-8f78-802b8f59a1bb] Running
	I1031 00:17:25.317696  248718 system_pods.go:61] "kube-controller-manager-embed-certs-078843" [9ccb551e-3e3f-4cdc-991e-65b41febf105] Running
	I1031 00:17:25.317702  248718 system_pods.go:61] "kube-proxy-287dq" [c9c3a3a9-ff79-4cd8-ab26-a4ca2bec1fd9] Running
	I1031 00:17:25.317709  248718 system_pods.go:61] "kube-scheduler-embed-certs-078843" [13a0f095-b945-437c-a7ef-929739bfcb01] Running
	I1031 00:17:25.317718  248718 system_pods.go:61] "metrics-server-57f55c9bc5-pm6qx" [5ed61015-eb88-4381-adc3-8d1f4021c6aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:17:25.317728  248718 system_pods.go:61] "storage-provisioner" [6bce0572-aad8-4a9f-978f-9bd0ff62904a] Running
	I1031 00:17:25.317737  248718 system_pods.go:74] duration metric: took 3.978040466s to wait for pod list to return data ...
	I1031 00:17:25.317752  248718 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:17:25.320120  248718 default_sa.go:45] found service account: "default"
	I1031 00:17:25.320147  248718 default_sa.go:55] duration metric: took 2.387709ms for default service account to be created ...
	I1031 00:17:25.320156  248718 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:17:25.325979  248718 system_pods.go:86] 8 kube-system pods found
	I1031 00:17:25.326004  248718 system_pods.go:89] "coredns-5dd5756b68-dqrs4" [f6d80a09-c397-4c78-a038-f07cad11de9c] Running
	I1031 00:17:25.326009  248718 system_pods.go:89] "etcd-embed-certs-078843" [2dd3d20f-1309-4ec9-ab75-6b00cadc5827] Running
	I1031 00:17:25.326014  248718 system_pods.go:89] "kube-apiserver-embed-certs-078843" [6a41123e-11a9-4aff-8f78-802b8f59a1bb] Running
	I1031 00:17:25.326018  248718 system_pods.go:89] "kube-controller-manager-embed-certs-078843" [9ccb551e-3e3f-4cdc-991e-65b41febf105] Running
	I1031 00:17:25.326022  248718 system_pods.go:89] "kube-proxy-287dq" [c9c3a3a9-ff79-4cd8-ab26-a4ca2bec1fd9] Running
	I1031 00:17:25.326025  248718 system_pods.go:89] "kube-scheduler-embed-certs-078843" [13a0f095-b945-437c-a7ef-929739bfcb01] Running
	I1031 00:17:25.326055  248718 system_pods.go:89] "metrics-server-57f55c9bc5-pm6qx" [5ed61015-eb88-4381-adc3-8d1f4021c6aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:17:25.326079  248718 system_pods.go:89] "storage-provisioner" [6bce0572-aad8-4a9f-978f-9bd0ff62904a] Running
	I1031 00:17:25.326088  248718 system_pods.go:126] duration metric: took 5.92719ms to wait for k8s-apps to be running ...
	I1031 00:17:25.326097  248718 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:17:25.326148  248718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:17:25.342753  248718 system_svc.go:56] duration metric: took 16.646026ms WaitForService to wait for kubelet.
	I1031 00:17:25.342775  248718 kubeadm.go:581] duration metric: took 4m20.257105243s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:17:25.342793  248718 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:17:25.348257  248718 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:17:25.348315  248718 node_conditions.go:123] node cpu capacity is 2
	I1031 00:17:25.348379  248718 node_conditions.go:105] duration metric: took 5.579398ms to run NodePressure ...
	I1031 00:17:25.348413  248718 start.go:228] waiting for startup goroutines ...
	I1031 00:17:25.348426  248718 start.go:233] waiting for cluster config update ...
	I1031 00:17:25.348440  248718 start.go:242] writing updated cluster config ...
	I1031 00:17:25.349022  248718 ssh_runner.go:195] Run: rm -f paused
	I1031 00:17:25.415112  248718 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1031 00:17:25.418179  248718 out.go:177] * Done! kubectl is now configured to use "embed-certs-078843" cluster and "default" namespace by default
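With the embed-certs-078843 profile reported as done, a quick verification from the host would look roughly like the following, assuming (as the "Done!" message implies) that minikube created a kubectl context named after the profile and that the kubeconfig it just updated is the active one:

    kubectl config use-context embed-certs-078843
    kubectl get nodes
    kubectl -n kube-system get pods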
	I1031 00:17:21.166338  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:23.666609  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:24.447530  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:26.947352  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:29.570822  248387 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004974 seconds
	I1031 00:17:29.570964  248387 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 00:17:29.587033  248387 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 00:17:30.119470  248387 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 00:17:30.119696  248387 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-640155 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 00:17:30.635312  248387 kubeadm.go:322] [bootstrap-token] Using token: cwaa4b.bqwxrocs0j7ngn44
	I1031 00:17:26.166271  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:28.664576  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:30.664963  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:30.636717  248387 out.go:204]   - Configuring RBAC rules ...
	I1031 00:17:30.636873  248387 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 00:17:30.642895  248387 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 00:17:30.651729  248387 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 00:17:30.655472  248387 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 00:17:30.659228  248387 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 00:17:30.668748  248387 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 00:17:30.690255  248387 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 00:17:30.950445  248387 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 00:17:31.051453  248387 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 00:17:31.051475  248387 kubeadm.go:322] 
	I1031 00:17:31.051536  248387 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 00:17:31.051583  248387 kubeadm.go:322] 
	I1031 00:17:31.051709  248387 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 00:17:31.051728  248387 kubeadm.go:322] 
	I1031 00:17:31.051767  248387 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 00:17:31.051843  248387 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 00:17:31.051930  248387 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 00:17:31.051943  248387 kubeadm.go:322] 
	I1031 00:17:31.052013  248387 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1031 00:17:31.052024  248387 kubeadm.go:322] 
	I1031 00:17:31.052104  248387 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 00:17:31.052130  248387 kubeadm.go:322] 
	I1031 00:17:31.052191  248387 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 00:17:31.052280  248387 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 00:17:31.052375  248387 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 00:17:31.052383  248387 kubeadm.go:322] 
	I1031 00:17:31.052485  248387 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 00:17:31.052578  248387 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 00:17:31.052612  248387 kubeadm.go:322] 
	I1031 00:17:31.052744  248387 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token cwaa4b.bqwxrocs0j7ngn44 \
	I1031 00:17:31.052900  248387 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 \
	I1031 00:17:31.052957  248387 kubeadm.go:322] 	--control-plane 
	I1031 00:17:31.052969  248387 kubeadm.go:322] 
	I1031 00:17:31.053092  248387 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 00:17:31.053107  248387 kubeadm.go:322] 
	I1031 00:17:31.053217  248387 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token cwaa4b.bqwxrocs0j7ngn44 \
	I1031 00:17:31.053359  248387 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1031 00:17:31.053517  248387 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
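The trailing kubeadm warning is harmless for this run; as the message itself states, it would be addressed on the node with:

    sudo systemctl enable kubelet.service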
	I1031 00:17:31.053540  248387 cni.go:84] Creating CNI manager for ""
	I1031 00:17:31.053552  248387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:17:31.055477  248387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:17:29.447694  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:31.449117  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:33.947759  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:31.056845  248387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:17:31.095104  248387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
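minikube has now written a 457-byte bridge CNI config onto the node. The file contents are not shown in the log; to inspect what was actually written, one could run something like the following from the host (profile name taken from this run):

    minikube -p no-preload-640155 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"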
	I1031 00:17:31.131198  248387 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:17:31.131322  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:31.131337  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4 minikube.k8s.io/name=no-preload-640155 minikube.k8s.io/updated_at=2023_10_31T00_17_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:31.581951  248387 ops.go:34] apiserver oom_adj: -16
	I1031 00:17:31.582010  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:31.741330  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:32.350182  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:32.850643  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:33.350205  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:33.850216  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:34.349583  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:34.850194  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:32.666281  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:35.168579  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:36.449644  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:38.946898  249055 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:35.350661  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:35.850301  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:36.349673  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:36.849749  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:37.349755  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:37.850628  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:38.350204  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:38.849697  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:39.350194  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:39.850027  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:37.667083  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:40.166305  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:40.349747  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:40.850194  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:41.350476  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:41.850214  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:42.350555  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:42.850295  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:43.350645  248387 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:17:43.679529  248387 kubeadm.go:1081] duration metric: took 12.548274555s to wait for elevateKubeSystemPrivileges.
	I1031 00:17:43.679561  248387 kubeadm.go:406] StartCluster complete in 5m6.156207823s
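The repeated "kubectl get sa default" calls above appear to be minikube polling until the default ServiceAccount exists in the new cluster; once it shows up (after roughly 12.5s here), elevateKubeSystemPrivileges and StartCluster are declared complete. The equivalent one-off check, copied from the polled command and run on the node:

    sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig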
	I1031 00:17:43.679585  248387 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:17:43.679674  248387 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:17:43.682045  248387 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:17:43.684483  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:17:43.684785  248387 config.go:182] Loaded profile config "no-preload-640155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:17:43.684856  248387 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:17:43.684927  248387 addons.go:69] Setting storage-provisioner=true in profile "no-preload-640155"
	I1031 00:17:43.685036  248387 addons.go:231] Setting addon storage-provisioner=true in "no-preload-640155"
	W1031 00:17:43.685063  248387 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:17:43.685159  248387 host.go:66] Checking if "no-preload-640155" exists ...
	I1031 00:17:43.685323  248387 addons.go:69] Setting metrics-server=true in profile "no-preload-640155"
	I1031 00:17:43.685339  248387 addons.go:231] Setting addon metrics-server=true in "no-preload-640155"
	W1031 00:17:43.685356  248387 addons.go:240] addon metrics-server should already be in state true
	I1031 00:17:43.685395  248387 host.go:66] Checking if "no-preload-640155" exists ...
	I1031 00:17:43.685653  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.685706  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.685893  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.685978  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.686168  248387 addons.go:69] Setting default-storageclass=true in profile "no-preload-640155"
	I1031 00:17:43.686191  248387 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-640155"
	I1031 00:17:43.686545  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.686651  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.705002  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1031 00:17:43.705181  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39807
	I1031 00:17:43.705556  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.706410  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.706515  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.706543  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.706893  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I1031 00:17:43.706968  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.707139  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:17:43.707141  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.707157  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.707503  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.708166  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.708183  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.708236  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.708752  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.708783  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.709044  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.709715  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.709762  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.711511  248387 addons.go:231] Setting addon default-storageclass=true in "no-preload-640155"
	W1031 00:17:43.711525  248387 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:17:43.711553  248387 host.go:66] Checking if "no-preload-640155" exists ...
	I1031 00:17:43.711887  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.711927  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.730687  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I1031 00:17:43.731513  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.732184  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.732205  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.732737  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.733201  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:17:43.734567  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33799
	I1031 00:17:43.734708  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38837
	I1031 00:17:43.735166  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.735665  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.735687  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.736245  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.736325  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.736490  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:17:43.736559  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:17:43.737461  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.739478  248387 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:17:43.737480  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.738913  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:17:43.741138  248387 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:17:43.741154  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:17:43.741176  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:17:43.742564  248387 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:17:43.741663  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.744300  248387 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:17:43.744312  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:17:43.744326  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:17:43.744413  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.745065  248387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:17:43.745106  248387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:17:43.753076  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.753082  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:17:43.753110  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:17:43.753196  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:17:43.753200  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:17:43.753235  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.753249  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.753282  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:17:43.753376  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:17:43.753469  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:17:43.753527  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:17:43.753624  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:17:43.753739  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:17:43.770481  248387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44553
	I1031 00:17:43.770925  248387 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:17:43.773191  248387 main.go:141] libmachine: Using API Version  1
	I1031 00:17:43.773223  248387 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:17:43.773636  248387 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:17:43.773840  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetState
	I1031 00:17:43.775633  248387 main.go:141] libmachine: (no-preload-640155) Calling .DriverName
	I1031 00:17:43.775954  248387 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:17:43.775969  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:17:43.775988  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHHostname
	I1031 00:17:43.778552  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.778797  248387 main.go:141] libmachine: (no-preload-640155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a4:c2", ip: ""} in network mk-no-preload-640155: {Iface:virbr4 ExpiryTime:2023-10-31 01:03:17 +0000 UTC Type:0 Mac:52:54:00:bd:a4:c2 Iaid: IPaddr:192.168.61.168 Prefix:24 Hostname:no-preload-640155 Clientid:01:52:54:00:bd:a4:c2}
	I1031 00:17:43.778823  248387 main.go:141] libmachine: (no-preload-640155) DBG | domain no-preload-640155 has defined IP address 192.168.61.168 and MAC address 52:54:00:bd:a4:c2 in network mk-no-preload-640155
	I1031 00:17:43.779021  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHPort
	I1031 00:17:43.779204  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHKeyPath
	I1031 00:17:43.779386  248387 main.go:141] libmachine: (no-preload-640155) Calling .GetSSHUsername
	I1031 00:17:43.779683  248387 sshutil.go:53] new ssh client: &{IP:192.168.61.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/no-preload-640155/id_rsa Username:docker}
	I1031 00:17:43.936171  248387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:17:43.958064  248387 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:17:43.958098  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:17:43.967116  248387 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-640155" context rescaled to 1 replicas
	I1031 00:17:43.967170  248387 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.168 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:17:43.969408  248387 out.go:177] * Verifying Kubernetes components...
	I1031 00:17:40.138062  249055 pod_ready.go:81] duration metric: took 4m0.000119587s waiting for pod "metrics-server-57f55c9bc5-7klqw" in "kube-system" namespace to be "Ready" ...
	E1031 00:17:40.138098  249055 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:17:40.138122  249055 pod_ready.go:38] duration metric: took 4m11.730710605s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:17:40.138164  249055 kubeadm.go:640] restartCluster took 4m31.295508075s
	W1031 00:17:40.138262  249055 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1031 00:17:40.138297  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
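This run (pid 249055) hit the 4m0s ceiling waiting for metrics-server-57f55c9bc5-7klqw to report Ready and now resets the cluster. A hedged sketch of what one might run before the reset to see why the pod never became Ready; the pod name is from the log, and these are standard kubectl debugging commands rather than anything minikube did here:

    kubectl -n kube-system describe pod metrics-server-57f55c9bc5-7klqw
    kubectl -n kube-system get events --sort-by=.lastTimestamp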
	I1031 00:17:43.970897  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:17:43.997796  248387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:17:44.038710  248387 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:17:44.038738  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:17:44.075299  248387 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:17:44.075333  248387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:17:44.084795  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 00:17:44.172770  248387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
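After the metrics-server manifests are applied, a hedged way to confirm the addon actually serves metrics would be the commands below. Note that the addon image selected earlier points at fake.domain/registry.k8s.io/echoserver:1.4, so in this test the deployment presumably never becomes available by design, which matches the "Ready":"False" waits seen throughout this log; rollout status would simply block until its timeout.

    kubectl -n kube-system rollout status deploy/metrics-server
    kubectl top nodes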
	I1031 00:17:42.670020  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:45.165914  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:46.365906  248387 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.39492875s)
	I1031 00:17:46.365968  248387 node_ready.go:35] waiting up to 6m0s for node "no-preload-640155" to be "Ready" ...
	I1031 00:17:46.365998  248387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.368158747s)
	I1031 00:17:46.366066  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.366074  248387 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.281185782s)
	I1031 00:17:46.366103  248387 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1031 00:17:46.366086  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.366354  248387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.430149836s)
	I1031 00:17:46.366390  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.366402  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.366600  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.366612  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.366622  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.366631  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.366682  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.366732  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.366742  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.366751  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.366761  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.368921  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.368922  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.368958  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.369248  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.369293  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.369307  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.375988  248387 node_ready.go:49] node "no-preload-640155" has status "Ready":"True"
	I1031 00:17:46.376021  248387 node_ready.go:38] duration metric: took 10.036603ms waiting for node "no-preload-640155" to be "Ready" ...
	I1031 00:17:46.376036  248387 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:17:46.401563  248387 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gp6pj" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:46.425939  248387 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.253121961s)
	I1031 00:17:46.426019  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.426035  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.427461  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.427471  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.427488  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.427498  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.427508  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.427894  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.427943  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.427954  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.427971  248387 addons.go:467] Verifying addon metrics-server=true in "no-preload-640155"
	I1031 00:17:46.436605  248387 main.go:141] libmachine: Making call to close driver server
	I1031 00:17:46.436630  248387 main.go:141] libmachine: (no-preload-640155) Calling .Close
	I1031 00:17:46.436927  248387 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:17:46.436959  248387 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:17:46.436987  248387 main.go:141] libmachine: (no-preload-640155) DBG | Closing plugin on server side
	I1031 00:17:46.438529  248387 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1031 00:17:46.439869  248387 addons.go:502] enable addons completed in 2.755015847s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1031 00:17:48.527903  248387 pod_ready.go:92] pod "coredns-5dd5756b68-gp6pj" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.527939  248387 pod_ready.go:81] duration metric: took 2.126335033s waiting for pod "coredns-5dd5756b68-gp6pj" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.527954  248387 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.544043  248387 pod_ready.go:92] pod "etcd-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.544070  248387 pod_ready.go:81] duration metric: took 16.106665ms waiting for pod "etcd-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.544085  248387 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.552043  248387 pod_ready.go:92] pod "kube-apiserver-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.552075  248387 pod_ready.go:81] duration metric: took 7.981099ms waiting for pod "kube-apiserver-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.552092  248387 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.563073  248387 pod_ready.go:92] pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.563112  248387 pod_ready.go:81] duration metric: took 11.009619ms waiting for pod "kube-controller-manager-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.563128  248387 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pkjsl" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.771051  248387 pod_ready.go:92] pod "kube-proxy-pkjsl" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:48.771080  248387 pod_ready.go:81] duration metric: took 207.944354ms waiting for pod "kube-proxy-pkjsl" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:48.771090  248387 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:49.170323  248387 pod_ready.go:92] pod "kube-scheduler-no-preload-640155" in "kube-system" namespace has status "Ready":"True"
	I1031 00:17:49.170354  248387 pod_ready.go:81] duration metric: took 399.25516ms waiting for pod "kube-scheduler-no-preload-640155" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:49.170369  248387 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace to be "Ready" ...
	I1031 00:17:47.166417  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:49.665614  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:51.479213  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:53.979583  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:54.802281  249055 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.663950968s)
	I1031 00:17:54.802401  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:17:54.818228  249055 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:17:54.829802  249055 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:17:54.841203  249055 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:17:54.841254  249055 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1031 00:17:54.900359  249055 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1031 00:17:54.900453  249055 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 00:17:55.068403  249055 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 00:17:55.068563  249055 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 00:17:55.068676  249055 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 00:17:55.316737  249055 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 00:17:51.665839  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:53.666626  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:55.319016  249055 out.go:204]   - Generating certificates and keys ...
	I1031 00:17:55.319172  249055 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 00:17:55.319275  249055 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 00:17:55.319395  249055 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1031 00:17:55.319481  249055 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1031 00:17:55.319603  249055 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1031 00:17:55.320419  249055 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1031 00:17:55.320814  249055 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1031 00:17:55.321700  249055 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1031 00:17:55.322211  249055 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1031 00:17:55.322708  249055 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1031 00:17:55.323252  249055 kubeadm.go:322] [certs] Using the existing "sa" key
	I1031 00:17:55.323344  249055 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 00:17:55.388450  249055 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 00:17:55.461692  249055 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 00:17:55.807861  249055 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 00:17:55.963028  249055 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 00:17:55.963510  249055 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 00:17:55.966001  249055 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 00:17:55.967951  249055 out.go:204]   - Booting up control plane ...
	I1031 00:17:55.968125  249055 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 00:17:55.968238  249055 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 00:17:55.968343  249055 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 00:17:55.989357  249055 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 00:17:55.990439  249055 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 00:17:55.990548  249055 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1031 00:17:56.126548  249055 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 00:17:56.479126  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:58.479232  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:56.166722  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:17:58.667319  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:00.980893  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:03.481571  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:04.629984  249055 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502923 seconds
	I1031 00:18:04.630137  249055 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 00:18:04.643529  249055 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 00:18:05.178336  249055 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 00:18:05.178549  249055 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-892233 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 00:18:05.695447  249055 kubeadm.go:322] [bootstrap-token] Using token: g00nr2.87o2mnv2u0jwf81d
	I1031 00:18:01.165232  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:03.166303  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:05.664899  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:05.696918  249055 out.go:204]   - Configuring RBAC rules ...
	I1031 00:18:05.697075  249055 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 00:18:05.706237  249055 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 00:18:05.720767  249055 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 00:18:05.731239  249055 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 00:18:05.736130  249055 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 00:18:05.740949  249055 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 00:18:05.759998  249055 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 00:18:06.051798  249055 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 00:18:06.118986  249055 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 00:18:06.119014  249055 kubeadm.go:322] 
	I1031 00:18:06.119078  249055 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 00:18:06.119084  249055 kubeadm.go:322] 
	I1031 00:18:06.119179  249055 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 00:18:06.119190  249055 kubeadm.go:322] 
	I1031 00:18:06.119225  249055 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 00:18:06.119282  249055 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 00:18:06.119326  249055 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 00:18:06.119332  249055 kubeadm.go:322] 
	I1031 00:18:06.119376  249055 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1031 00:18:06.119382  249055 kubeadm.go:322] 
	I1031 00:18:06.119424  249055 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 00:18:06.119435  249055 kubeadm.go:322] 
	I1031 00:18:06.119484  249055 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 00:18:06.119551  249055 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 00:18:06.119677  249055 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 00:18:06.119703  249055 kubeadm.go:322] 
	I1031 00:18:06.119830  249055 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 00:18:06.119938  249055 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 00:18:06.119957  249055 kubeadm.go:322] 
	I1031 00:18:06.120024  249055 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token g00nr2.87o2mnv2u0jwf81d \
	I1031 00:18:06.120179  249055 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 \
	I1031 00:18:06.120208  249055 kubeadm.go:322] 	--control-plane 
	I1031 00:18:06.120219  249055 kubeadm.go:322] 
	I1031 00:18:06.120330  249055 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 00:18:06.120368  249055 kubeadm.go:322] 
	I1031 00:18:06.120468  249055 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token g00nr2.87o2mnv2u0jwf81d \
	I1031 00:18:06.120559  249055 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
	I1031 00:18:06.121091  249055 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 00:18:06.121119  249055 cni.go:84] Creating CNI manager for ""
	I1031 00:18:06.121127  249055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:18:06.123073  249055 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:18:06.124566  249055 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:18:06.140064  249055 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:18:06.171195  249055 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:18:06.171343  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:06.171359  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4 minikube.k8s.io/name=default-k8s-diff-port-892233 minikube.k8s.io/updated_at=2023_10_31T00_18_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:06.256957  249055 ops.go:34] apiserver oom_adj: -16
	I1031 00:18:06.637700  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:06.769942  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:07.383359  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:07.883621  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:08.384017  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:08.883751  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:05.979125  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:07.979280  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:09.981296  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:07.666495  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:10.165765  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:09.383896  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:09.883523  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:10.384077  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:10.883546  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:11.383417  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:11.883493  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:12.384043  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:12.884000  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:13.383479  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:13.884100  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:12.479614  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:14.978890  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:12.666054  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:15.163419  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:14.384001  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:14.884297  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:15.383607  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:15.883617  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:16.383591  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:16.884141  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:17.384112  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:17.884196  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:18.384156  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:18.883687  249055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:19.114222  249055 kubeadm.go:1081] duration metric: took 12.942949327s to wait for elevateKubeSystemPrivileges.
	I1031 00:18:19.114261  249055 kubeadm.go:406] StartCluster complete in 5m10.335188993s
	I1031 00:18:19.114295  249055 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:18:19.114401  249055 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:18:19.116632  249055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:18:19.116971  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:18:19.117107  249055 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:18:19.117188  249055 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-892233"
	I1031 00:18:19.117202  249055 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-892233"
	I1031 00:18:19.117221  249055 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-892233"
	I1031 00:18:19.117231  249055 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-892233"
	I1031 00:18:19.117239  249055 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-892233"
	W1031 00:18:19.117243  249055 addons.go:240] addon metrics-server should already be in state true
	I1031 00:18:19.117265  249055 config.go:182] Loaded profile config "default-k8s-diff-port-892233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:18:19.117305  249055 host.go:66] Checking if "default-k8s-diff-port-892233" exists ...
	I1031 00:18:19.117213  249055 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-892233"
	W1031 00:18:19.117326  249055 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:18:19.117372  249055 host.go:66] Checking if "default-k8s-diff-port-892233" exists ...
	I1031 00:18:19.117711  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.117740  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.117746  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.117761  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.117711  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.117830  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.134384  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38003
	I1031 00:18:19.134426  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I1031 00:18:19.134810  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.134915  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.135437  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.135461  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.135648  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.135675  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.136018  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.136074  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.136578  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.136625  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.137167  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.137198  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.144184  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35153
	I1031 00:18:19.144763  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.145263  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.145293  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.145648  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.145852  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:18:19.152132  249055 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-892233"
	W1031 00:18:19.152194  249055 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:18:19.152240  249055 host.go:66] Checking if "default-k8s-diff-port-892233" exists ...
	I1031 00:18:19.152775  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.152867  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.154334  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45793
	I1031 00:18:19.155862  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38905
	I1031 00:18:19.157267  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.158677  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.158735  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.158863  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.164983  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.165014  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.165044  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.166267  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.166284  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:18:19.169122  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:18:19.169199  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:18:19.174627  249055 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:18:19.170934  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:18:19.176219  249055 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:18:19.177591  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:18:19.177619  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:18:19.179052  249055 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:18:19.176693  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45785
	I1031 00:18:19.178184  249055 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-892233" context rescaled to 1 replicas
	I1031 00:18:19.179171  249055 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.2 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:18:19.181526  249055 out.go:177] * Verifying Kubernetes components...
	I1031 00:18:19.182930  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:18:16.980163  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:18.981179  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:17.165555  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:19.174245  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:19.181603  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.184667  249055 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:18:19.184676  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:18:19.184683  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:18:19.184698  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.179546  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.184702  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:18:19.182398  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:18:19.184914  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:18:19.185097  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:18:19.185743  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.185761  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.185827  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:18:19.186516  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.187946  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.187988  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:18:19.188014  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.188359  249055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:18:19.188374  249055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:18:19.188549  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:18:19.188757  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:18:19.189003  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:18:19.189160  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:18:19.203564  249055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I1031 00:18:19.203935  249055 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:18:19.204374  249055 main.go:141] libmachine: Using API Version  1
	I1031 00:18:19.204399  249055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:18:19.204741  249055 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:18:19.204994  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetState
	I1031 00:18:19.207012  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .DriverName
	I1031 00:18:19.207266  249055 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:18:19.207283  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:18:19.207302  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHHostname
	I1031 00:18:19.209950  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.210314  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:e2:1e", ip: ""} in network mk-default-k8s-diff-port-892233: {Iface:virbr3 ExpiryTime:2023-10-31 01:12:50 +0000 UTC Type:0 Mac:52:54:00:f4:e2:1e Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:default-k8s-diff-port-892233 Clientid:01:52:54:00:f4:e2:1e}
	I1031 00:18:19.210332  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | domain default-k8s-diff-port-892233 has defined IP address 192.168.39.2 and MAC address 52:54:00:f4:e2:1e in network mk-default-k8s-diff-port-892233
	I1031 00:18:19.210507  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHPort
	I1031 00:18:19.210701  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHKeyPath
	I1031 00:18:19.210830  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .GetSSHUsername
	I1031 00:18:19.210962  249055 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/default-k8s-diff-port-892233/id_rsa Username:docker}
	I1031 00:18:19.423829  249055 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:18:19.423852  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:18:19.440581  249055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:18:19.466961  249055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:18:19.511517  249055 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:18:19.511543  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:18:19.591560  249055 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:18:19.591588  249055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:18:19.628414  249055 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-892233" to be "Ready" ...
	I1031 00:18:19.628560  249055 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 00:18:19.648329  249055 node_ready.go:49] node "default-k8s-diff-port-892233" has status "Ready":"True"
	I1031 00:18:19.648353  249055 node_ready.go:38] duration metric: took 19.904402ms waiting for node "default-k8s-diff-port-892233" to be "Ready" ...
	I1031 00:18:19.648364  249055 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:18:19.658333  249055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:18:19.692147  249055 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-j9g85" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:21.904902  249055 pod_ready.go:102] pod "coredns-5dd5756b68-j9g85" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:22.104924  249055 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.637923019s)
	I1031 00:18:22.104999  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.104997  249055 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.664373813s)
	I1031 00:18:22.105008  249055 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.476413511s)
	I1031 00:18:22.105035  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.105013  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.105052  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.105035  249055 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1031 00:18:22.105350  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.105366  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.105376  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.105388  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.105479  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) DBG | Closing plugin on server side
	I1031 00:18:22.105541  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.105554  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.105573  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.105594  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.105821  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.105852  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.105860  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.105870  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.146205  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.146231  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.146598  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.146631  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.219948  249055 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.561551335s)
	I1031 00:18:22.220017  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.220033  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.220412  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.220441  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.220459  249055 main.go:141] libmachine: Making call to close driver server
	I1031 00:18:22.220474  249055 main.go:141] libmachine: (default-k8s-diff-port-892233) Calling .Close
	I1031 00:18:22.220820  249055 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:18:22.220840  249055 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:18:22.220853  249055 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-892233"
	I1031 00:18:22.222793  249055 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1031 00:18:22.224194  249055 addons.go:502] enable addons completed in 3.107083845s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1031 00:18:22.880805  249055 pod_ready.go:92] pod "coredns-5dd5756b68-j9g85" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:22.880840  249055 pod_ready.go:81] duration metric: took 3.18866819s waiting for pod "coredns-5dd5756b68-j9g85" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:22.880853  249055 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pjtg4" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.912036  249055 pod_ready.go:92] pod "coredns-5dd5756b68-pjtg4" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:23.912066  249055 pod_ready.go:81] duration metric: took 1.031204489s waiting for pod "coredns-5dd5756b68-pjtg4" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.912079  249055 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.918589  249055 pod_ready.go:92] pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:23.918609  249055 pod_ready.go:81] duration metric: took 6.523247ms waiting for pod "etcd-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.918619  249055 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.925040  249055 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:23.925059  249055 pod_ready.go:81] duration metric: took 6.434141ms waiting for pod "kube-apiserver-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:23.925067  249055 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.073002  249055 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:24.073029  249055 pod_ready.go:81] duration metric: took 147.953037ms waiting for pod "kube-controller-manager-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.073044  249055 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-77gzz" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:21.478451  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:23.479849  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:24.473158  249055 pod_ready.go:92] pod "kube-proxy-77gzz" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:24.473184  249055 pod_ready.go:81] duration metric: took 400.13282ms waiting for pod "kube-proxy-77gzz" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.473194  249055 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.873506  249055 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace has status "Ready":"True"
	I1031 00:18:24.873528  249055 pod_ready.go:81] duration metric: took 400.328112ms waiting for pod "kube-scheduler-default-k8s-diff-port-892233" in "kube-system" namespace to be "Ready" ...
	I1031 00:18:24.873538  249055 pod_ready.go:38] duration metric: took 5.225163782s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:18:24.873558  249055 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:18:24.873617  249055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:18:24.890474  249055 api_server.go:72] duration metric: took 5.711236569s to wait for apiserver process to appear ...
	I1031 00:18:24.890508  249055 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:18:24.890533  249055 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8444/healthz ...
	I1031 00:18:24.896826  249055 api_server.go:279] https://192.168.39.2:8444/healthz returned 200:
	ok
	I1031 00:18:24.898203  249055 api_server.go:141] control plane version: v1.28.3
	I1031 00:18:24.898226  249055 api_server.go:131] duration metric: took 7.708512ms to wait for apiserver health ...
	I1031 00:18:24.898234  249055 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:18:25.076806  249055 system_pods.go:59] 9 kube-system pods found
	I1031 00:18:25.076835  249055 system_pods.go:61] "coredns-5dd5756b68-j9g85" [e4534565-4d9b-44d6-bcf1-5b57645645bc] Running
	I1031 00:18:25.076840  249055 system_pods.go:61] "coredns-5dd5756b68-pjtg4" [6c771175-3c51-4988-8b90-58ff0e33a5f8] Running
	I1031 00:18:25.076845  249055 system_pods.go:61] "etcd-default-k8s-diff-port-892233" [47dea79e-371e-45ff-960e-41e96a4427e5] Running
	I1031 00:18:25.076850  249055 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-892233" [87be303c-6850-4ab1-98a3-c8a08f601965] Running
	I1031 00:18:25.076854  249055 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-892233" [7533baa8-87b4-4fa9-8385-9945e0fffaf4] Running
	I1031 00:18:25.076857  249055 system_pods.go:61] "kube-proxy-77gzz" [e7cb1c4a-2ad0-47b9-bca4-2e03d4e1cf39] Running
	I1031 00:18:25.076861  249055 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-892233" [b7630ce4-db97-45a6-a9a3-f7b8f3128182] Running
	I1031 00:18:25.076868  249055 system_pods.go:61] "metrics-server-57f55c9bc5-8pc87" [c91683ff-11bf-4530-90c3-91f4b28e2dab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:18:25.076874  249055 system_pods.go:61] "storage-provisioner" [995d33e4-0d28-4efb-8d30-d5a05d04b61c] Running
	I1031 00:18:25.076882  249055 system_pods.go:74] duration metric: took 178.64211ms to wait for pod list to return data ...
	I1031 00:18:25.076889  249055 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:18:25.272531  249055 default_sa.go:45] found service account: "default"
	I1031 00:18:25.272557  249055 default_sa.go:55] duration metric: took 195.662215ms for default service account to be created ...
	I1031 00:18:25.272567  249055 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:18:25.477225  249055 system_pods.go:86] 9 kube-system pods found
	I1031 00:18:25.477258  249055 system_pods.go:89] "coredns-5dd5756b68-j9g85" [e4534565-4d9b-44d6-bcf1-5b57645645bc] Running
	I1031 00:18:25.477266  249055 system_pods.go:89] "coredns-5dd5756b68-pjtg4" [6c771175-3c51-4988-8b90-58ff0e33a5f8] Running
	I1031 00:18:25.477275  249055 system_pods.go:89] "etcd-default-k8s-diff-port-892233" [47dea79e-371e-45ff-960e-41e96a4427e5] Running
	I1031 00:18:25.477282  249055 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-892233" [87be303c-6850-4ab1-98a3-c8a08f601965] Running
	I1031 00:18:25.477292  249055 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-892233" [7533baa8-87b4-4fa9-8385-9945e0fffaf4] Running
	I1031 00:18:25.477298  249055 system_pods.go:89] "kube-proxy-77gzz" [e7cb1c4a-2ad0-47b9-bca4-2e03d4e1cf39] Running
	I1031 00:18:25.477309  249055 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-892233" [b7630ce4-db97-45a6-a9a3-f7b8f3128182] Running
	I1031 00:18:25.477323  249055 system_pods.go:89] "metrics-server-57f55c9bc5-8pc87" [c91683ff-11bf-4530-90c3-91f4b28e2dab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:18:25.477333  249055 system_pods.go:89] "storage-provisioner" [995d33e4-0d28-4efb-8d30-d5a05d04b61c] Running
	I1031 00:18:25.477343  249055 system_pods.go:126] duration metric: took 204.769317ms to wait for k8s-apps to be running ...
	I1031 00:18:25.477356  249055 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:18:25.477416  249055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:18:25.494054  249055 system_svc.go:56] duration metric: took 16.688482ms WaitForService to wait for kubelet.
	I1031 00:18:25.494079  249055 kubeadm.go:581] duration metric: took 6.314858374s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:18:25.494097  249055 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:18:25.673698  249055 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:18:25.673729  249055 node_conditions.go:123] node cpu capacity is 2
	I1031 00:18:25.673742  249055 node_conditions.go:105] duration metric: took 179.63938ms to run NodePressure ...
	I1031 00:18:25.673756  249055 start.go:228] waiting for startup goroutines ...
	I1031 00:18:25.673764  249055 start.go:233] waiting for cluster config update ...
	I1031 00:18:25.673778  249055 start.go:242] writing updated cluster config ...
	I1031 00:18:25.674107  249055 ssh_runner.go:195] Run: rm -f paused
	I1031 00:18:25.729477  249055 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1031 00:18:25.731433  249055 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-892233" cluster and "default" namespace by default
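The pod_ready.go lines that follow are repeated Ready-condition checks against pods such as metrics-server in the kube-system namespace. A minimal client-go sketch of such a check is shown below; it assumes a recent client-go release and a hypothetical kubeconfig path, and it is not minikube's own implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has its Ready condition set to True,
// which is the kind of status the pod_ready.go log lines are reporting.
func isPodReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// The kubeconfig path here is an assumption for the sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(clientset, "kube-system", "metrics-server-57f55c9bc5-d2xg4")
	fmt.Println(ready, err)
}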
	I1031 00:18:21.666578  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:23.667065  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:25.980194  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:27.983361  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:26.166839  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:28.664820  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:30.665038  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:30.478938  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:32.980862  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:33.164907  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:35.165601  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:35.479491  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:37.978397  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:39.979837  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:37.167604  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:39.665586  248084 pod_ready.go:102] pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:41.982368  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:44.476905  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:41.359122  248084 pod_ready.go:81] duration metric: took 4m0.000818862s waiting for pod "metrics-server-74d5856cc6-l6gmw" in "kube-system" namespace to be "Ready" ...
	E1031 00:18:41.359173  248084 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:18:41.359193  248084 pod_ready.go:38] duration metric: took 4m1.201522433s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:18:41.359227  248084 kubeadm.go:640] restartCluster took 5m7.223824608s
	W1031 00:18:41.359305  248084 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1031 00:18:41.359335  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1031 00:18:46.480820  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:48.487440  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:46.413914  248084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.054544075s)
	I1031 00:18:46.414001  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:18:46.427362  248084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 00:18:46.436557  248084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 00:18:46.444929  248084 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 00:18:46.445010  248084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1031 00:18:46.659252  248084 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 00:18:50.978966  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:52.980133  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:59.061122  248084 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1031 00:18:59.061211  248084 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 00:18:59.061324  248084 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 00:18:59.061476  248084 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 00:18:59.061695  248084 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 00:18:59.061861  248084 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 00:18:59.061989  248084 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 00:18:59.062059  248084 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1031 00:18:59.062158  248084 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 00:18:59.063991  248084 out.go:204]   - Generating certificates and keys ...
	I1031 00:18:59.064091  248084 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 00:18:59.064178  248084 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 00:18:59.064261  248084 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1031 00:18:59.064320  248084 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1031 00:18:59.064400  248084 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1031 00:18:59.064478  248084 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1031 00:18:59.064590  248084 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1031 00:18:59.064687  248084 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1031 00:18:59.064777  248084 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1031 00:18:59.064884  248084 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1031 00:18:59.064967  248084 kubeadm.go:322] [certs] Using the existing "sa" key
	I1031 00:18:59.065056  248084 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 00:18:59.065123  248084 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 00:18:59.065199  248084 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 00:18:59.065284  248084 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 00:18:59.065375  248084 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 00:18:59.065483  248084 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 00:18:59.067362  248084 out.go:204]   - Booting up control plane ...
	I1031 00:18:59.067477  248084 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 00:18:59.067584  248084 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 00:18:59.067655  248084 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 00:18:59.067761  248084 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 00:18:59.067952  248084 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 00:18:59.068089  248084 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004306 seconds
	I1031 00:18:59.068174  248084 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 00:18:59.068330  248084 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 00:18:59.068419  248084 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 00:18:59.068536  248084 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-225140 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1031 00:18:59.068585  248084 kubeadm.go:322] [bootstrap-token] Using token: 1g4jse.zc5opkcf3va44z15
	I1031 00:18:59.070040  248084 out.go:204]   - Configuring RBAC rules ...
	I1031 00:18:59.070142  248084 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 00:18:59.070305  248084 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 00:18:59.070451  248084 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 00:18:59.070569  248084 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 00:18:59.070657  248084 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 00:18:59.070700  248084 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 00:18:59.070742  248084 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 00:18:59.070748  248084 kubeadm.go:322] 
	I1031 00:18:59.070799  248084 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 00:18:59.070809  248084 kubeadm.go:322] 
	I1031 00:18:59.070900  248084 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 00:18:59.070912  248084 kubeadm.go:322] 
	I1031 00:18:59.070933  248084 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 00:18:59.070983  248084 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 00:18:59.071030  248084 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 00:18:59.071035  248084 kubeadm.go:322] 
	I1031 00:18:59.071082  248084 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 00:18:59.071158  248084 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 00:18:59.071269  248084 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 00:18:59.071278  248084 kubeadm.go:322] 
	I1031 00:18:59.071392  248084 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1031 00:18:59.071498  248084 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 00:18:59.071509  248084 kubeadm.go:322] 
	I1031 00:18:59.071608  248084 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1g4jse.zc5opkcf3va44z15 \
	I1031 00:18:59.071749  248084 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 \
	I1031 00:18:59.071783  248084 kubeadm.go:322]     --control-plane 	  
	I1031 00:18:59.071793  248084 kubeadm.go:322] 
	I1031 00:18:59.071899  248084 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 00:18:59.071912  248084 kubeadm.go:322] 
	I1031 00:18:59.072051  248084 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1g4jse.zc5opkcf3va44z15 \
	I1031 00:18:59.072196  248084 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:0f674979f395aea0c6a3497811fde41e4dd7988f915ffc1c0c19ddda431d4233 
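The --discovery-token-ca-cert-hash value printed in the join command above is the SHA-256 digest of the cluster CA's DER-encoded Subject Public Key Info. The short Go sketch below computes a hash of that form from a CA certificate; the certificate path is an illustrative assumption.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash returns "sha256:" followed by the SHA-256 digest of the
// certificate's DER-encoded Subject Public Key Info, the format kubeadm
// expects for --discovery-token-ca-cert-hash.
func caCertHash(path string) (string, error) {
	pemBytes, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return "", fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(spki)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	// Path is an assumption; minikube keeps certs under /var/lib/minikube/certs.
	hash, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	fmt.Println(hash, err)
}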
	I1031 00:18:59.072228  248084 cni.go:84] Creating CNI manager for ""
	I1031 00:18:59.072243  248084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:18:59.073949  248084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 00:18:55.479295  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:57.983131  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:18:59.075900  248084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 00:18:59.087288  248084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 00:18:59.112130  248084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 00:18:59.112241  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:59.112258  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4 minikube.k8s.io/name=old-k8s-version-225140 minikube.k8s.io/updated_at=2023_10_31T00_18_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:59.144297  248084 ops.go:34] apiserver oom_adj: -16
	I1031 00:18:59.352655  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:18:59.464268  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:00.069316  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:00.569382  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:00.481532  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:02.978563  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:01.069124  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:01.569535  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:02.069209  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:02.569292  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:03.069280  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:03.569469  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:04.069050  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:04.569082  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:05.068795  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:05.569625  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:05.479444  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:07.980592  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:09.982873  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:06.069318  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:06.569043  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:07.069599  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:07.569098  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:08.069690  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:08.569668  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:09.069735  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:09.569294  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:10.069080  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:10.569441  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:11.068991  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:11.569543  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:12.069495  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:12.568757  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:13.069012  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:13.569638  248084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 00:19:13.789009  248084 kubeadm.go:1081] duration metric: took 14.676828073s to wait for elevateKubeSystemPrivileges.
	I1031 00:19:13.789061  248084 kubeadm.go:406] StartCluster complete in 5m39.716410778s
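The repeated "kubectl get sa default" runs above are a poll loop waiting for the default service account to exist before the cluster is declared started. A minimal sketch of that kind of command-polling loop follows; the binary path, kubeconfig location, and sleep interval are illustrative assumptions rather than the values minikube uses.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA repeatedly runs `kubectl get sa default` until the command
// exits successfully or the timeout expires.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for default service account")
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.16.0/kubectl", "/var/lib/minikube/kubeconfig", time.Minute)
	fmt.Println(err)
}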
	I1031 00:19:13.789090  248084 settings.go:142] acquiring lock: {Name:mk1313180e12d1f22ab48a8f0a7e0f8d16b3d905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:19:13.789209  248084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:19:13.791883  248084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/kubeconfig: {Name:mk263aa208f2563a65a87fc637f32331e8543639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:19:13.792204  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 00:19:13.792368  248084 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 00:19:13.792451  248084 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-225140"
	I1031 00:19:13.792457  248084 config.go:182] Loaded profile config "old-k8s-version-225140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1031 00:19:13.792471  248084 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-225140"
	W1031 00:19:13.792480  248084 addons.go:240] addon storage-provisioner should already be in state true
	I1031 00:19:13.792485  248084 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-225140"
	I1031 00:19:13.792515  248084 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-225140"
	I1031 00:19:13.792531  248084 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-225140"
	I1031 00:19:13.792534  248084 host.go:66] Checking if "old-k8s-version-225140" exists ...
	W1031 00:19:13.792540  248084 addons.go:240] addon metrics-server should already be in state true
	I1031 00:19:13.792568  248084 host.go:66] Checking if "old-k8s-version-225140" exists ...
	I1031 00:19:13.792516  248084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-225140"
	I1031 00:19:13.792981  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.792981  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.793021  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.793104  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.793147  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.793254  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.811115  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34449
	I1031 00:19:13.811377  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I1031 00:19:13.811793  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.811913  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.812411  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.812433  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.812586  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.812636  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.812764  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.812833  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35585
	I1031 00:19:13.813035  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:19:13.813186  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.813284  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.813624  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.813649  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.813896  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.813938  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.813984  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.814742  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.814791  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.817328  248084 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-225140"
	W1031 00:19:13.817352  248084 addons.go:240] addon default-storageclass should already be in state true
	I1031 00:19:13.817383  248084 host.go:66] Checking if "old-k8s-version-225140" exists ...
	I1031 00:19:13.817651  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.817676  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.831410  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I1031 00:19:13.832059  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.832665  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.832686  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.833071  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.833396  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:19:13.834672  248084 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-225140" context rescaled to 1 replicas
	I1031 00:19:13.834715  248084 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.65 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:19:13.837043  248084 out.go:177] * Verifying Kubernetes components...
	I1031 00:19:13.834927  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I1031 00:19:13.835269  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:19:13.835504  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I1031 00:19:13.837823  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.838827  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:19:13.840427  248084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 00:19:13.838307  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.839305  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.842067  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.842200  248084 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:19:13.842220  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 00:19:13.842259  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:19:13.842518  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.843110  248084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:19:13.843159  248084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:19:13.843539  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.843577  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.844178  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.844488  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:19:13.846259  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.846704  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:19:13.848811  248084 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 00:19:12.479334  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:14.484105  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:13.847143  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:19:13.847192  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:19:13.850295  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.850300  248084 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 00:19:13.850319  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 00:19:13.850341  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:19:13.850537  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:19:13.850712  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:19:13.851115  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:19:13.853651  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.854192  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:19:13.854226  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.854563  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:19:13.854758  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:19:13.854967  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:19:13.855112  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:19:13.862473  248084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33537
	I1031 00:19:13.862970  248084 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:19:13.863496  248084 main.go:141] libmachine: Using API Version  1
	I1031 00:19:13.863526  248084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:19:13.864026  248084 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:19:13.864257  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetState
	I1031 00:19:13.866270  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .DriverName
	I1031 00:19:13.866530  248084 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 00:19:13.866546  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 00:19:13.866565  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHHostname
	I1031 00:19:13.870580  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.870992  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:98:61", ip: ""} in network mk-old-k8s-version-225140: {Iface:virbr1 ExpiryTime:2023-10-31 01:13:14 +0000 UTC Type:0 Mac:52:54:00:9c:98:61 Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:old-k8s-version-225140 Clientid:01:52:54:00:9c:98:61}
	I1031 00:19:13.871028  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | domain old-k8s-version-225140 has defined IP address 192.168.72.65 and MAC address 52:54:00:9c:98:61 in network mk-old-k8s-version-225140
	I1031 00:19:13.871142  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHPort
	I1031 00:19:13.871372  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHKeyPath
	I1031 00:19:13.871542  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .GetSSHUsername
	I1031 00:19:13.871678  248084 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/old-k8s-version-225140/id_rsa Username:docker}
	I1031 00:19:14.034938  248084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 00:19:14.040988  248084 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1031 00:19:14.041016  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1031 00:19:14.061666  248084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 00:19:14.111727  248084 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1031 00:19:14.111758  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1031 00:19:14.125610  248084 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-225140" to be "Ready" ...
	I1031 00:19:14.125707  248084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 00:19:14.165369  248084 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:19:14.165397  248084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1031 00:19:14.193366  248084 node_ready.go:49] node "old-k8s-version-225140" has status "Ready":"True"
	I1031 00:19:14.193389  248084 node_ready.go:38] duration metric: took 67.750717ms waiting for node "old-k8s-version-225140" to be "Ready" ...
	I1031 00:19:14.193401  248084 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:19:14.207505  248084 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace to be "Ready" ...
	I1031 00:19:14.276613  248084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1031 00:19:15.572065  248084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.537074399s)
	I1031 00:19:15.572136  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.572152  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.572177  248084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.510470973s)
	I1031 00:19:15.572219  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.572238  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.572336  248084 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.446596481s)
	I1031 00:19:15.572363  248084 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1031 00:19:15.572603  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.572621  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.572632  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.572642  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.572697  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.572711  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.572757  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.572778  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.572756  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Closing plugin on server side
	I1031 00:19:15.572908  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Closing plugin on server side
	I1031 00:19:15.572910  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.572970  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.573533  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.573554  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.586186  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.586210  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.586507  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.586530  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.586546  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Closing plugin on server side
	I1031 00:19:15.700772  248084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.424096792s)
	I1031 00:19:15.700835  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.700851  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.701196  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.701217  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.701230  248084 main.go:141] libmachine: Making call to close driver server
	I1031 00:19:15.701242  248084 main.go:141] libmachine: (old-k8s-version-225140) Calling .Close
	I1031 00:19:15.701531  248084 main.go:141] libmachine: (old-k8s-version-225140) DBG | Closing plugin on server side
	I1031 00:19:15.701561  248084 main.go:141] libmachine: Successfully made call to close driver server
	I1031 00:19:15.701574  248084 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 00:19:15.701585  248084 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-225140"
	I1031 00:19:15.703404  248084 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1031 00:19:15.704856  248084 addons.go:502] enable addons completed in 1.91251063s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1031 00:19:16.980629  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:19.478989  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:16.278623  248084 pod_ready.go:102] pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:18.779192  248084 pod_ready.go:102] pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:21.978882  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:23.981260  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:21.276797  248084 pod_ready.go:102] pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:23.277531  248084 pod_ready.go:92] pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace has status "Ready":"True"
	I1031 00:19:23.277561  248084 pod_ready.go:81] duration metric: took 9.070020963s waiting for pod "coredns-5644d7b6d9-v4lf9" in "kube-system" namespace to be "Ready" ...
	I1031 00:19:23.277575  248084 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v2pp4" in "kube-system" namespace to be "Ready" ...
	I1031 00:19:23.283345  248084 pod_ready.go:92] pod "kube-proxy-v2pp4" in "kube-system" namespace has status "Ready":"True"
	I1031 00:19:23.283367  248084 pod_ready.go:81] duration metric: took 5.78532ms waiting for pod "kube-proxy-v2pp4" in "kube-system" namespace to be "Ready" ...
	I1031 00:19:23.283374  248084 pod_ready.go:38] duration metric: took 9.089964646s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:19:23.283394  248084 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:19:23.283452  248084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:19:23.300275  248084 api_server.go:72] duration metric: took 9.465522842s to wait for apiserver process to appear ...
	I1031 00:19:23.300294  248084 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:19:23.300308  248084 api_server.go:253] Checking apiserver healthz at https://192.168.72.65:8443/healthz ...
	I1031 00:19:23.309064  248084 api_server.go:279] https://192.168.72.65:8443/healthz returned 200:
	ok
	I1031 00:19:23.310485  248084 api_server.go:141] control plane version: v1.16.0
	I1031 00:19:23.310508  248084 api_server.go:131] duration metric: took 10.207384ms to wait for apiserver health ...
	I1031 00:19:23.310517  248084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:19:23.314181  248084 system_pods.go:59] 4 kube-system pods found
	I1031 00:19:23.314205  248084 system_pods.go:61] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:23.314210  248084 system_pods.go:61] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:23.314217  248084 system_pods.go:61] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:23.314224  248084 system_pods.go:61] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:23.314230  248084 system_pods.go:74] duration metric: took 3.706807ms to wait for pod list to return data ...
	I1031 00:19:23.314236  248084 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:19:23.316411  248084 default_sa.go:45] found service account: "default"
	I1031 00:19:23.316435  248084 default_sa.go:55] duration metric: took 2.192647ms for default service account to be created ...
	I1031 00:19:23.316443  248084 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:19:23.320111  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:23.320137  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:23.320148  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:23.320159  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:23.320167  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:23.320190  248084 retry.go:31] will retry after 199.965979ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:23.524726  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:23.524754  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:23.524760  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:23.524766  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:23.524773  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:23.524788  248084 retry.go:31] will retry after 276.623866ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:23.807038  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:23.807066  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:23.807072  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:23.807080  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:23.807087  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:23.807104  248084 retry.go:31] will retry after 316.245952ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:24.128239  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:24.128268  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:24.128277  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:24.128287  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:24.128297  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:24.128326  248084 retry.go:31] will retry after 483.558456ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:24.616454  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:24.616486  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:24.616494  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:24.616505  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:24.616514  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:24.616534  248084 retry.go:31] will retry after 700.807178ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:25.323617  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:25.323666  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:25.323675  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:25.323687  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:25.323697  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:25.323718  248084 retry.go:31] will retry after 768.27646ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:26.485923  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:28.978283  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:26.097257  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:26.097283  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:26.097288  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:26.097295  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:26.097302  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:26.097320  248084 retry.go:31] will retry after 1.004884505s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:27.108295  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:27.108330  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:27.108339  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:27.108350  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:27.108360  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:27.108380  248084 retry.go:31] will retry after 1.256932803s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:28.369629  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:28.369668  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:28.369677  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:28.369688  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:28.369698  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:28.369722  248084 retry.go:31] will retry after 1.554545012s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:29.930268  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:29.930295  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:29.930314  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:29.930322  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:29.930338  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:29.930358  248084 retry.go:31] will retry after 1.794325328s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:30.981402  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:33.478794  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:31.729473  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:31.729511  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:31.729520  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:31.729531  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:31.729542  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:31.729563  248084 retry.go:31] will retry after 2.111450847s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:33.846759  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:33.846787  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:33.846792  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:33.846801  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:33.846807  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:33.846824  248084 retry.go:31] will retry after 2.198886772s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:35.981890  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:38.478284  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:36.050460  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:36.050491  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:36.050496  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:36.050505  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:36.050512  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:36.050530  248084 retry.go:31] will retry after 3.361148685s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:39.417603  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:39.417633  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:39.417640  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:39.417651  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:39.417660  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:39.417680  248084 retry.go:31] will retry after 4.41093106s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:40.978990  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:43.479103  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:43.834041  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:43.834083  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:43.834093  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:43.834104  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:43.834115  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:43.834134  248084 retry.go:31] will retry after 5.294476287s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:45.482986  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:47.978397  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:49.980183  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:49.133233  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:49.133264  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:49.133269  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:49.133276  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:49.133284  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:49.133300  248084 retry.go:31] will retry after 7.429511286s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:19:51.980355  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:53.981222  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:56.480456  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:58.979640  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:19:56.567247  248084 system_pods.go:86] 4 kube-system pods found
	I1031 00:19:56.567278  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:19:56.567284  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:19:56.567290  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:19:56.567297  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:19:56.567314  248084 retry.go:31] will retry after 10.944177906s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:20:01.477606  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:03.481220  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:05.979560  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:07.984688  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:07.518274  248084 system_pods.go:86] 7 kube-system pods found
	I1031 00:20:07.518300  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:20:07.518306  248084 system_pods.go:89] "kube-apiserver-old-k8s-version-225140" [8452eeb3-bce5-4105-aca6-41c438d0cd33] Pending
	I1031 00:20:07.518310  248084 system_pods.go:89] "kube-controller-manager-old-k8s-version-225140" [8d9ce065-09f3-4323-a564-195c4ae96389] Pending
	I1031 00:20:07.518314  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:20:07.518318  248084 system_pods.go:89] "kube-scheduler-old-k8s-version-225140" [aa567dc5-4668-4730-bfee-e1afdac14098] Pending
	I1031 00:20:07.518325  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:20:07.518331  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:20:07.518349  248084 retry.go:31] will retry after 8.381829497s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 00:20:10.485015  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:12.978647  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:15.479489  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:17.980834  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:15.906034  248084 system_pods.go:86] 8 kube-system pods found
	I1031 00:20:15.906066  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:20:15.906074  248084 system_pods.go:89] "etcd-old-k8s-version-225140" [c3c7682d-4b48-4e50-ba06-676723621872] Pending
	I1031 00:20:15.906080  248084 system_pods.go:89] "kube-apiserver-old-k8s-version-225140" [8452eeb3-bce5-4105-aca6-41c438d0cd33] Running
	I1031 00:20:15.906087  248084 system_pods.go:89] "kube-controller-manager-old-k8s-version-225140" [8d9ce065-09f3-4323-a564-195c4ae96389] Running
	I1031 00:20:15.906093  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:20:15.906100  248084 system_pods.go:89] "kube-scheduler-old-k8s-version-225140" [aa567dc5-4668-4730-bfee-e1afdac14098] Running
	I1031 00:20:15.906109  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:20:15.906120  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:20:15.906138  248084 retry.go:31] will retry after 11.167332732s: missing components: etcd
	I1031 00:20:20.481147  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:22.980858  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:24.982265  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:27.080224  248084 system_pods.go:86] 8 kube-system pods found
	I1031 00:20:27.080263  248084 system_pods.go:89] "coredns-5644d7b6d9-v4lf9" [0399403f-e33d-4c8e-8420-c3c0e5c622c2] Running
	I1031 00:20:27.080272  248084 system_pods.go:89] "etcd-old-k8s-version-225140" [c3c7682d-4b48-4e50-ba06-676723621872] Running
	I1031 00:20:27.080279  248084 system_pods.go:89] "kube-apiserver-old-k8s-version-225140" [8452eeb3-bce5-4105-aca6-41c438d0cd33] Running
	I1031 00:20:27.080287  248084 system_pods.go:89] "kube-controller-manager-old-k8s-version-225140" [8d9ce065-09f3-4323-a564-195c4ae96389] Running
	I1031 00:20:27.080294  248084 system_pods.go:89] "kube-proxy-v2pp4" [00b895cf-5155-458e-abf7-d890aa8bdb24] Running
	I1031 00:20:27.080301  248084 system_pods.go:89] "kube-scheduler-old-k8s-version-225140" [aa567dc5-4668-4730-bfee-e1afdac14098] Running
	I1031 00:20:27.080318  248084 system_pods.go:89] "metrics-server-74d5856cc6-hp8k4" [e10e8ea4-e3c4-4db1-911f-8ce365912043] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:20:27.080332  248084 system_pods.go:89] "storage-provisioner" [853c4f0f-7367-4955-a3c1-2972ac938fcd] Running
	I1031 00:20:27.080343  248084 system_pods.go:126] duration metric: took 1m3.763892339s to wait for k8s-apps to be running ...
	I1031 00:20:27.080357  248084 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:20:27.080408  248084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:20:27.098039  248084 system_svc.go:56] duration metric: took 17.670849ms WaitForService to wait for kubelet.
	I1031 00:20:27.098075  248084 kubeadm.go:581] duration metric: took 1m13.263332949s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:20:27.098105  248084 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:20:27.101093  248084 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:20:27.101126  248084 node_conditions.go:123] node cpu capacity is 2
	I1031 00:20:27.101182  248084 node_conditions.go:105] duration metric: took 3.066191ms to run NodePressure ...
	I1031 00:20:27.101198  248084 start.go:228] waiting for startup goroutines ...
	I1031 00:20:27.101208  248084 start.go:233] waiting for cluster config update ...
	I1031 00:20:27.101222  248084 start.go:242] writing updated cluster config ...
	I1031 00:20:27.101586  248084 ssh_runner.go:195] Run: rm -f paused
	I1031 00:20:27.157211  248084 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1031 00:20:27.159327  248084 out.go:177] 
	W1031 00:20:27.160872  248084 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1031 00:20:27.163644  248084 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1031 00:20:27.165443  248084 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-225140" cluster and "default" namespace by default
	I1031 00:20:27.481582  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:29.978812  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:32.478965  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:34.479052  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:36.486487  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:38.981098  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:41.478500  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:43.478933  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:45.978794  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:47.978937  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:49.980825  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:52.479268  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:54.978422  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:57.478476  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:20:59.478602  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:01.478639  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:03.479969  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:05.978907  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:08.478656  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:10.978877  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:12.981683  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:15.479094  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:17.978893  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:20.479878  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:22.483287  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:24.978077  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:26.979122  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:28.981476  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:31.478577  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:33.479816  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:35.979787  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:37.981859  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:40.477762  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:42.479382  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:44.479508  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:46.479851  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:48.482610  248387 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace has status "Ready":"False"
	I1031 00:21:49.171002  248387 pod_ready.go:81] duration metric: took 4m0.000595541s waiting for pod "metrics-server-57f55c9bc5-d2xg4" in "kube-system" namespace to be "Ready" ...
	E1031 00:21:49.171048  248387 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1031 00:21:49.171063  248387 pod_ready.go:38] duration metric: took 4m2.795014386s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 00:21:49.171097  248387 api_server.go:52] waiting for apiserver process to appear ...
	I1031 00:21:49.171149  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:21:49.171248  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:21:49.226512  248387 cri.go:89] found id: "d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:21:49.226543  248387 cri.go:89] found id: ""
	I1031 00:21:49.226555  248387 logs.go:284] 1 containers: [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850]
	I1031 00:21:49.226647  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.230993  248387 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:21:49.231060  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:21:49.270646  248387 cri.go:89] found id: "07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:21:49.270677  248387 cri.go:89] found id: ""
	I1031 00:21:49.270688  248387 logs.go:284] 1 containers: [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3]
	I1031 00:21:49.270760  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.275165  248387 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:21:49.275225  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:21:49.317730  248387 cri.go:89] found id: "12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:21:49.317757  248387 cri.go:89] found id: ""
	I1031 00:21:49.317768  248387 logs.go:284] 1 containers: [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e]
	I1031 00:21:49.317818  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.322362  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:21:49.322430  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:21:49.361430  248387 cri.go:89] found id: "6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:21:49.361462  248387 cri.go:89] found id: ""
	I1031 00:21:49.361474  248387 logs.go:284] 1 containers: [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c]
	I1031 00:21:49.361535  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.365642  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:21:49.365713  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:21:49.409230  248387 cri.go:89] found id: "744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:21:49.409258  248387 cri.go:89] found id: ""
	I1031 00:21:49.409269  248387 logs.go:284] 1 containers: [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373]
	I1031 00:21:49.409329  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.413540  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:21:49.413622  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:21:49.458477  248387 cri.go:89] found id: "d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:21:49.458506  248387 cri.go:89] found id: ""
	I1031 00:21:49.458518  248387 logs.go:284] 1 containers: [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb]
	I1031 00:21:49.458586  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.462471  248387 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:21:49.462540  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:21:49.498272  248387 cri.go:89] found id: ""
	I1031 00:21:49.498299  248387 logs.go:284] 0 containers: []
	W1031 00:21:49.498309  248387 logs.go:286] No container was found matching "kindnet"
	I1031 00:21:49.498316  248387 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:21:49.498386  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:21:49.538677  248387 cri.go:89] found id: "bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:21:49.538704  248387 cri.go:89] found id: ""
	I1031 00:21:49.538714  248387 logs.go:284] 1 containers: [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07]
	I1031 00:21:49.538776  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:21:49.544293  248387 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:21:49.544318  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:21:49.719505  248387 logs.go:123] Gathering logs for kube-apiserver [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850] ...
	I1031 00:21:49.719542  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:21:49.770108  248387 logs.go:123] Gathering logs for kube-scheduler [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c] ...
	I1031 00:21:49.770146  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:21:49.826250  248387 logs.go:123] Gathering logs for storage-provisioner [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07] ...
	I1031 00:21:49.826289  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:21:49.864212  248387 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:21:49.864244  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:21:50.278307  248387 logs.go:123] Gathering logs for container status ...
	I1031 00:21:50.278348  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:21:50.332860  248387 logs.go:123] Gathering logs for kubelet ...
	I1031 00:21:50.332894  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 00:21:50.413002  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.413224  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.413368  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.413524  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:21:50.435703  248387 logs.go:123] Gathering logs for dmesg ...
	I1031 00:21:50.435739  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:21:50.451836  248387 logs.go:123] Gathering logs for etcd [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3] ...
	I1031 00:21:50.451865  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:21:50.493883  248387 logs.go:123] Gathering logs for coredns [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e] ...
	I1031 00:21:50.493912  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:21:50.533935  248387 logs.go:123] Gathering logs for kube-proxy [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373] ...
	I1031 00:21:50.533967  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:21:50.582053  248387 logs.go:123] Gathering logs for kube-controller-manager [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb] ...
	I1031 00:21:50.582094  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:21:50.638988  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:21:50.639021  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 00:21:50.639177  248387 out.go:239] X Problems detected in kubelet:
	W1031 00:21:50.639191  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.639201  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.639213  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:21:50.639219  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:21:50.639225  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:21:50.639232  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:22:00.639748  248387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 00:22:00.663810  248387 api_server.go:72] duration metric: took 4m16.69659563s to wait for apiserver process to appear ...
	I1031 00:22:00.663846  248387 api_server.go:88] waiting for apiserver healthz status ...
	I1031 00:22:00.663904  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:22:00.663980  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:22:00.705584  248387 cri.go:89] found id: "d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:22:00.705611  248387 cri.go:89] found id: ""
	I1031 00:22:00.705620  248387 logs.go:284] 1 containers: [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850]
	I1031 00:22:00.705672  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.710031  248387 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:22:00.710113  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:22:00.747821  248387 cri.go:89] found id: "07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:22:00.747850  248387 cri.go:89] found id: ""
	I1031 00:22:00.747861  248387 logs.go:284] 1 containers: [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3]
	I1031 00:22:00.747926  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.752647  248387 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:22:00.752733  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:22:00.802165  248387 cri.go:89] found id: "12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:22:00.802200  248387 cri.go:89] found id: ""
	I1031 00:22:00.802210  248387 logs.go:284] 1 containers: [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e]
	I1031 00:22:00.802274  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.807367  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:22:00.807451  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:22:00.846633  248387 cri.go:89] found id: "6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:22:00.846661  248387 cri.go:89] found id: ""
	I1031 00:22:00.846670  248387 logs.go:284] 1 containers: [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c]
	I1031 00:22:00.846736  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.851197  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:22:00.851282  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:22:00.891522  248387 cri.go:89] found id: "744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:22:00.891549  248387 cri.go:89] found id: ""
	I1031 00:22:00.891559  248387 logs.go:284] 1 containers: [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373]
	I1031 00:22:00.891624  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.896269  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:22:00.896369  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:22:00.937565  248387 cri.go:89] found id: "d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:22:00.937594  248387 cri.go:89] found id: ""
	I1031 00:22:00.937606  248387 logs.go:284] 1 containers: [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb]
	I1031 00:22:00.937672  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:00.942205  248387 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:22:00.942287  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:22:00.984788  248387 cri.go:89] found id: ""
	I1031 00:22:00.984814  248387 logs.go:284] 0 containers: []
	W1031 00:22:00.984821  248387 logs.go:286] No container was found matching "kindnet"
	I1031 00:22:00.984827  248387 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:22:00.984883  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:22:01.032572  248387 cri.go:89] found id: "bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:22:01.032601  248387 cri.go:89] found id: ""
	I1031 00:22:01.032621  248387 logs.go:284] 1 containers: [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07]
	I1031 00:22:01.032685  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:01.037253  248387 logs.go:123] Gathering logs for container status ...
	I1031 00:22:01.037280  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:22:01.096027  248387 logs.go:123] Gathering logs for kubelet ...
	I1031 00:22:01.096065  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 00:22:01.166608  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:01.166786  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:01.166925  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:01.167075  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:22:01.188441  248387 logs.go:123] Gathering logs for etcd [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3] ...
	I1031 00:22:01.188473  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:22:01.238925  248387 logs.go:123] Gathering logs for coredns [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e] ...
	I1031 00:22:01.238961  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:22:01.278987  248387 logs.go:123] Gathering logs for kube-controller-manager [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb] ...
	I1031 00:22:01.279024  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:22:01.340249  248387 logs.go:123] Gathering logs for kube-proxy [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373] ...
	I1031 00:22:01.340284  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:22:01.381155  248387 logs.go:123] Gathering logs for storage-provisioner [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07] ...
	I1031 00:22:01.381191  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:22:01.421808  248387 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:22:01.421842  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:22:01.817836  248387 logs.go:123] Gathering logs for dmesg ...
	I1031 00:22:01.817877  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:22:01.832590  248387 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:22:01.832620  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:22:01.961348  248387 logs.go:123] Gathering logs for kube-apiserver [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850] ...
	I1031 00:22:01.961384  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:22:02.023997  248387 logs.go:123] Gathering logs for kube-scheduler [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c] ...
	I1031 00:22:02.024055  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:22:02.087279  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:22:02.087321  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 00:22:02.087437  248387 out.go:239] X Problems detected in kubelet:
	W1031 00:22:02.087460  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:02.087476  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:02.087485  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:02.087495  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:22:02.087513  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:22:02.087527  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:22:12.090012  248387 api_server.go:253] Checking apiserver healthz at https://192.168.61.168:8443/healthz ...
	I1031 00:22:12.096458  248387 api_server.go:279] https://192.168.61.168:8443/healthz returned 200:
	ok
	I1031 00:22:12.097833  248387 api_server.go:141] control plane version: v1.28.3
	I1031 00:22:12.097860  248387 api_server.go:131] duration metric: took 11.434005759s to wait for apiserver health ...
	I1031 00:22:12.097872  248387 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 00:22:12.097901  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1031 00:22:12.098004  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1031 00:22:12.161098  248387 cri.go:89] found id: "d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:22:12.161129  248387 cri.go:89] found id: ""
	I1031 00:22:12.161140  248387 logs.go:284] 1 containers: [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850]
	I1031 00:22:12.161199  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.166236  248387 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1031 00:22:12.166325  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1031 00:22:12.208793  248387 cri.go:89] found id: "07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:22:12.208815  248387 cri.go:89] found id: ""
	I1031 00:22:12.208824  248387 logs.go:284] 1 containers: [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3]
	I1031 00:22:12.208871  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.213722  248387 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1031 00:22:12.213791  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1031 00:22:12.256006  248387 cri.go:89] found id: "12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:22:12.256036  248387 cri.go:89] found id: ""
	I1031 00:22:12.256046  248387 logs.go:284] 1 containers: [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e]
	I1031 00:22:12.256116  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.260468  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1031 00:22:12.260546  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1031 00:22:12.305580  248387 cri.go:89] found id: "6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:22:12.305608  248387 cri.go:89] found id: ""
	I1031 00:22:12.305618  248387 logs.go:284] 1 containers: [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c]
	I1031 00:22:12.305687  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.313321  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1031 00:22:12.313390  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1031 00:22:12.359900  248387 cri.go:89] found id: "744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:22:12.359928  248387 cri.go:89] found id: ""
	I1031 00:22:12.359939  248387 logs.go:284] 1 containers: [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373]
	I1031 00:22:12.360003  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.364087  248387 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1031 00:22:12.364171  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1031 00:22:12.403635  248387 cri.go:89] found id: "d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:22:12.403660  248387 cri.go:89] found id: ""
	I1031 00:22:12.403675  248387 logs.go:284] 1 containers: [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb]
	I1031 00:22:12.403743  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.408014  248387 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1031 00:22:12.408087  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1031 00:22:12.449718  248387 cri.go:89] found id: ""
	I1031 00:22:12.449741  248387 logs.go:284] 0 containers: []
	W1031 00:22:12.449748  248387 logs.go:286] No container was found matching "kindnet"
	I1031 00:22:12.449753  248387 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1031 00:22:12.449802  248387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1031 00:22:12.490301  248387 cri.go:89] found id: "bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:22:12.490330  248387 cri.go:89] found id: ""
	I1031 00:22:12.490340  248387 logs.go:284] 1 containers: [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07]
	I1031 00:22:12.490396  248387 ssh_runner.go:195] Run: which crictl
	I1031 00:22:12.495061  248387 logs.go:123] Gathering logs for kube-proxy [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373] ...
	I1031 00:22:12.495125  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373"
	I1031 00:22:12.537124  248387 logs.go:123] Gathering logs for kube-controller-manager [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb] ...
	I1031 00:22:12.537163  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb"
	I1031 00:22:12.597600  248387 logs.go:123] Gathering logs for storage-provisioner [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07] ...
	I1031 00:22:12.597642  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07"
	I1031 00:22:12.637344  248387 logs.go:123] Gathering logs for container status ...
	I1031 00:22:12.637385  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1031 00:22:12.691076  248387 logs.go:123] Gathering logs for describe nodes ...
	I1031 00:22:12.691107  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1031 00:22:12.820546  248387 logs.go:123] Gathering logs for kube-apiserver [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850] ...
	I1031 00:22:12.820578  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850"
	I1031 00:22:12.871913  248387 logs.go:123] Gathering logs for coredns [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e] ...
	I1031 00:22:12.871953  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e"
	I1031 00:22:12.914661  248387 logs.go:123] Gathering logs for kube-scheduler [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c] ...
	I1031 00:22:12.914705  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c"
	I1031 00:22:12.965771  248387 logs.go:123] Gathering logs for CRI-O ...
	I1031 00:22:12.965810  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1031 00:22:13.352819  248387 logs.go:123] Gathering logs for kubelet ...
	I1031 00:22:13.352862  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1031 00:22:13.424722  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.424906  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.425062  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.425220  248387 logs.go:138] Found kubelet problem: Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:22:13.447363  248387 logs.go:123] Gathering logs for dmesg ...
	I1031 00:22:13.447393  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1031 00:22:13.462468  248387 logs.go:123] Gathering logs for etcd [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3] ...
	I1031 00:22:13.462502  248387 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3"
	I1031 00:22:13.507930  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:22:13.507960  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1031 00:22:13.508045  248387 out.go:239] X Problems detected in kubelet:
	W1031 00:22:13.508060  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.857663    4222 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.508072  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.857802    4222 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.508084  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: W1031 00:17:43.875086    4222 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	W1031 00:22:13.508097  248387 out.go:239]   Oct 31 00:17:43 no-preload-640155 kubelet[4222]: E1031 00:17:43.875123    4222 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-640155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-640155' and this object
	I1031 00:22:13.508107  248387 out.go:309] Setting ErrFile to fd 2...
	I1031 00:22:13.508114  248387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:22:23.516544  248387 system_pods.go:59] 8 kube-system pods found
	I1031 00:22:23.516574  248387 system_pods.go:61] "coredns-5dd5756b68-gp6pj" [b7086342-a1ed-42b3-819a-ad7d8211ad17] Running
	I1031 00:22:23.516579  248387 system_pods.go:61] "etcd-no-preload-640155" [d9381fc3-0181-4631-90e7-6749d37cf8ab] Running
	I1031 00:22:23.516584  248387 system_pods.go:61] "kube-apiserver-no-preload-640155" [26b9547d-6b10-428a-a26f-47b007f06402] Running
	I1031 00:22:23.516588  248387 system_pods.go:61] "kube-controller-manager-no-preload-640155" [7b5ec3dd-11a2-4409-a271-e3f4149c49fe] Running
	I1031 00:22:23.516592  248387 system_pods.go:61] "kube-proxy-pkjsl" [3cc67cf4-4a59-42bf-a6ca-b2be409f5077] Running
	I1031 00:22:23.516597  248387 system_pods.go:61] "kube-scheduler-no-preload-640155" [f027c450-e0ac-4184-88c8-5de421603b25] Running
	I1031 00:22:23.516604  248387 system_pods.go:61] "metrics-server-57f55c9bc5-d2xg4" [b16ae9e6-6deb-485f-af5c-35cafada4a39] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:22:23.516613  248387 system_pods.go:61] "storage-provisioner" [acf2b5d0-1773-4ee6-882d-daff300f9d80] Running
	I1031 00:22:23.516620  248387 system_pods.go:74] duration metric: took 11.418741675s to wait for pod list to return data ...
	I1031 00:22:23.516630  248387 default_sa.go:34] waiting for default service account to be created ...
	I1031 00:22:23.520026  248387 default_sa.go:45] found service account: "default"
	I1031 00:22:23.520050  248387 default_sa.go:55] duration metric: took 3.413856ms for default service account to be created ...
	I1031 00:22:23.520058  248387 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 00:22:23.526672  248387 system_pods.go:86] 8 kube-system pods found
	I1031 00:22:23.526704  248387 system_pods.go:89] "coredns-5dd5756b68-gp6pj" [b7086342-a1ed-42b3-819a-ad7d8211ad17] Running
	I1031 00:22:23.526712  248387 system_pods.go:89] "etcd-no-preload-640155" [d9381fc3-0181-4631-90e7-6749d37cf8ab] Running
	I1031 00:22:23.526719  248387 system_pods.go:89] "kube-apiserver-no-preload-640155" [26b9547d-6b10-428a-a26f-47b007f06402] Running
	I1031 00:22:23.526729  248387 system_pods.go:89] "kube-controller-manager-no-preload-640155" [7b5ec3dd-11a2-4409-a271-e3f4149c49fe] Running
	I1031 00:22:23.526736  248387 system_pods.go:89] "kube-proxy-pkjsl" [3cc67cf4-4a59-42bf-a6ca-b2be409f5077] Running
	I1031 00:22:23.526753  248387 system_pods.go:89] "kube-scheduler-no-preload-640155" [f027c450-e0ac-4184-88c8-5de421603b25] Running
	I1031 00:22:23.526765  248387 system_pods.go:89] "metrics-server-57f55c9bc5-d2xg4" [b16ae9e6-6deb-485f-af5c-35cafada4a39] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 00:22:23.526776  248387 system_pods.go:89] "storage-provisioner" [acf2b5d0-1773-4ee6-882d-daff300f9d80] Running
	I1031 00:22:23.526789  248387 system_pods.go:126] duration metric: took 6.724214ms to wait for k8s-apps to be running ...
	I1031 00:22:23.526801  248387 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 00:22:23.526862  248387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 00:22:23.546006  248387 system_svc.go:56] duration metric: took 19.183151ms WaitForService to wait for kubelet.
	I1031 00:22:23.546038  248387 kubeadm.go:581] duration metric: took 4m39.57883274s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 00:22:23.546066  248387 node_conditions.go:102] verifying NodePressure condition ...
	I1031 00:22:23.550930  248387 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 00:22:23.550975  248387 node_conditions.go:123] node cpu capacity is 2
	I1031 00:22:23.551004  248387 node_conditions.go:105] duration metric: took 4.930974ms to run NodePressure ...
	I1031 00:22:23.551041  248387 start.go:228] waiting for startup goroutines ...
	I1031 00:22:23.551053  248387 start.go:233] waiting for cluster config update ...
	I1031 00:22:23.551064  248387 start.go:242] writing updated cluster config ...
	I1031 00:22:23.551346  248387 ssh_runner.go:195] Run: rm -f paused
	I1031 00:22:23.603812  248387 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1031 00:22:23.605925  248387 out.go:177] * Done! kubectl is now configured to use "no-preload-640155" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-31 00:13:13 UTC, ends at Tue 2023-10-31 00:31:57 UTC. --
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.758975886Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712317758960511,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=370d0133-f72c-4027-a779-f32de9cfd29a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.759549783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6673e3b8-1eb9-4ada-80ee-1ae42e5f9ef9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.759624142Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6673e3b8-1eb9-4ada-80ee-1ae42e5f9ef9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.760871447Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b02ad2f08464e7bf0d0e0152d98a9e3ea4fd9c61fb13c820cd953360ac9df5b,PodSandboxId:1246bdda0a39d80178f654eadbe303e6eb499605f05298fbf1124a8c49427c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711556475982706,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c4f0f-7367-4955-a3c1-2972ac938fcd,},Annotations:map[string]string{io.kubernetes.container.hash: 964889a,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54fc00711e05416df6782fd1f612a41b9d9f4e8423c613d74902452c45a5d06,PodSandboxId:14505ca26a429c2977493ec204cea4662864280d2f58a40936dca4b50aeb343b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698711555966763404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v2pp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00b895cf-5155-458e-abf7-d890aa8bdb24,},Annotations:map[string]string{io.kubernetes.container.hash: fa9b7280,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5191f89ace8c0a3b9397f7eeb2ea00c979964318f54e77d8ddb900dd10398779,PodSandboxId:f73265bdfb045aa1e48a0fa45c6f3f5237de14c31d459a21a46f44fd5dd75b3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698711554882877203,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-v4lf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0399403f-e33d-4c8e-8420-c3c0e5c622c2,},Annotations:map[string]string{io.kubernetes.container.hash: 2807ee09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac12a0d51e7929fc6d365fd8a7650cea9dcad0da68fb6795a9aa976e1a4bce2c,PodSandboxId:f229a50755adc4acc8b68706063b0745efae6f91c1fe2645c96686dacf5d67a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698711530359275319,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f111d3056f4e1d7adaf55ddf5c5337f,},Annotations:map[st
ring]string{io.kubernetes.container.hash: d9ba4352,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd07eb095fc372d4ae6d5528a949555e0d68c1a988d72570d8540b179a5bb475,PodSandboxId:2cf58aef9978b4b3d583849c4e7d138c1f0a6a1c9f99534cc106e8f2592ced86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698711528829931236,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437b
cb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef03c12b91aae1b3775189b385685a0f56a34aa7e9cdd6c2ed14c7925555a52,PodSandboxId:35c012b3d849ae8eb3c439a853faa09e996cbd8cab2157abf7e6016fcb2ba3f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698711528848418844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf9a2574e05b88952239bf0bd14806a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 642ee56e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e82a22e28885a3c8d434e75d2ea82b563999341215ebcb4251dfcc84e6f7871,PodSandboxId:44d3e09316994598c639f386b05bc5658953fb47908e9c5ce265e0d79fbd8b0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698711528802004516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map
[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6673e3b8-1eb9-4ada-80ee-1ae42e5f9ef9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.807473643Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1276feaf-be1f-40e8-8cb5-cd14fd9359e5 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.807610822Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1276feaf-be1f-40e8-8cb5-cd14fd9359e5 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.809014669Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2acc7e9f-6537-42fb-b00b-36552414bc71 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.809488090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712317809472878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=2acc7e9f-6537-42fb-b00b-36552414bc71 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.810365369Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f4f9669d-3444-42ac-a41e-b950790e2866 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.810413770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f4f9669d-3444-42ac-a41e-b950790e2866 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.810566223Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b02ad2f08464e7bf0d0e0152d98a9e3ea4fd9c61fb13c820cd953360ac9df5b,PodSandboxId:1246bdda0a39d80178f654eadbe303e6eb499605f05298fbf1124a8c49427c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711556475982706,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c4f0f-7367-4955-a3c1-2972ac938fcd,},Annotations:map[string]string{io.kubernetes.container.hash: 964889a,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54fc00711e05416df6782fd1f612a41b9d9f4e8423c613d74902452c45a5d06,PodSandboxId:14505ca26a429c2977493ec204cea4662864280d2f58a40936dca4b50aeb343b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698711555966763404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v2pp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00b895cf-5155-458e-abf7-d890aa8bdb24,},Annotations:map[string]string{io.kubernetes.container.hash: fa9b7280,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5191f89ace8c0a3b9397f7eeb2ea00c979964318f54e77d8ddb900dd10398779,PodSandboxId:f73265bdfb045aa1e48a0fa45c6f3f5237de14c31d459a21a46f44fd5dd75b3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698711554882877203,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-v4lf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0399403f-e33d-4c8e-8420-c3c0e5c622c2,},Annotations:map[string]string{io.kubernetes.container.hash: 2807ee09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac12a0d51e7929fc6d365fd8a7650cea9dcad0da68fb6795a9aa976e1a4bce2c,PodSandboxId:f229a50755adc4acc8b68706063b0745efae6f91c1fe2645c96686dacf5d67a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698711530359275319,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f111d3056f4e1d7adaf55ddf5c5337f,},Annotations:map[st
ring]string{io.kubernetes.container.hash: d9ba4352,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd07eb095fc372d4ae6d5528a949555e0d68c1a988d72570d8540b179a5bb475,PodSandboxId:2cf58aef9978b4b3d583849c4e7d138c1f0a6a1c9f99534cc106e8f2592ced86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698711528829931236,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437b
cb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef03c12b91aae1b3775189b385685a0f56a34aa7e9cdd6c2ed14c7925555a52,PodSandboxId:35c012b3d849ae8eb3c439a853faa09e996cbd8cab2157abf7e6016fcb2ba3f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698711528848418844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf9a2574e05b88952239bf0bd14806a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 642ee56e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e82a22e28885a3c8d434e75d2ea82b563999341215ebcb4251dfcc84e6f7871,PodSandboxId:44d3e09316994598c639f386b05bc5658953fb47908e9c5ce265e0d79fbd8b0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698711528802004516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map
[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f4f9669d-3444-42ac-a41e-b950790e2866 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.849394140Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9502feca-9a58-4004-a15d-0e3264a2010b name=/runtime.v1.RuntimeService/Version
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.849451975Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9502feca-9a58-4004-a15d-0e3264a2010b name=/runtime.v1.RuntimeService/Version
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.850563389Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ab52c112-aeef-46e4-a5b4-9bcc74fd7e29 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.850973339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712317850956279,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=ab52c112-aeef-46e4-a5b4-9bcc74fd7e29 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.851605781Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=29f4f054-d9a1-4835-b1d9-0ece84463f3d name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.851665879Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=29f4f054-d9a1-4835-b1d9-0ece84463f3d name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.851839671Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b02ad2f08464e7bf0d0e0152d98a9e3ea4fd9c61fb13c820cd953360ac9df5b,PodSandboxId:1246bdda0a39d80178f654eadbe303e6eb499605f05298fbf1124a8c49427c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711556475982706,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c4f0f-7367-4955-a3c1-2972ac938fcd,},Annotations:map[string]string{io.kubernetes.container.hash: 964889a,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54fc00711e05416df6782fd1f612a41b9d9f4e8423c613d74902452c45a5d06,PodSandboxId:14505ca26a429c2977493ec204cea4662864280d2f58a40936dca4b50aeb343b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698711555966763404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v2pp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00b895cf-5155-458e-abf7-d890aa8bdb24,},Annotations:map[string]string{io.kubernetes.container.hash: fa9b7280,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5191f89ace8c0a3b9397f7eeb2ea00c979964318f54e77d8ddb900dd10398779,PodSandboxId:f73265bdfb045aa1e48a0fa45c6f3f5237de14c31d459a21a46f44fd5dd75b3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698711554882877203,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-v4lf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0399403f-e33d-4c8e-8420-c3c0e5c622c2,},Annotations:map[string]string{io.kubernetes.container.hash: 2807ee09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac12a0d51e7929fc6d365fd8a7650cea9dcad0da68fb6795a9aa976e1a4bce2c,PodSandboxId:f229a50755adc4acc8b68706063b0745efae6f91c1fe2645c96686dacf5d67a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698711530359275319,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f111d3056f4e1d7adaf55ddf5c5337f,},Annotations:map[st
ring]string{io.kubernetes.container.hash: d9ba4352,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd07eb095fc372d4ae6d5528a949555e0d68c1a988d72570d8540b179a5bb475,PodSandboxId:2cf58aef9978b4b3d583849c4e7d138c1f0a6a1c9f99534cc106e8f2592ced86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698711528829931236,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437b
cb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef03c12b91aae1b3775189b385685a0f56a34aa7e9cdd6c2ed14c7925555a52,PodSandboxId:35c012b3d849ae8eb3c439a853faa09e996cbd8cab2157abf7e6016fcb2ba3f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698711528848418844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf9a2574e05b88952239bf0bd14806a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 642ee56e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e82a22e28885a3c8d434e75d2ea82b563999341215ebcb4251dfcc84e6f7871,PodSandboxId:44d3e09316994598c639f386b05bc5658953fb47908e9c5ce265e0d79fbd8b0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698711528802004516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map
[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=29f4f054-d9a1-4835-b1d9-0ece84463f3d name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.884076892Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4ea6f25d-7696-4082-a705-bf223b3e34c3 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.884128876Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4ea6f25d-7696-4082-a705-bf223b3e34c3 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.885322685Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0fd7052c-4aef-423f-bc7a-a7ba364388fa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.885666369Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712317885655393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=0fd7052c-4aef-423f-bc7a-a7ba364388fa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.886611864Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=84b153eb-777d-4a6c-8b78-42f6b57a959e name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.886653065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=84b153eb-777d-4a6c-8b78-42f6b57a959e name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:31:57 old-k8s-version-225140 crio[717]: time="2023-10-31 00:31:57.886833781Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b02ad2f08464e7bf0d0e0152d98a9e3ea4fd9c61fb13c820cd953360ac9df5b,PodSandboxId:1246bdda0a39d80178f654eadbe303e6eb499605f05298fbf1124a8c49427c68,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698711556475982706,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c4f0f-7367-4955-a3c1-2972ac938fcd,},Annotations:map[string]string{io.kubernetes.container.hash: 964889a,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54fc00711e05416df6782fd1f612a41b9d9f4e8423c613d74902452c45a5d06,PodSandboxId:14505ca26a429c2977493ec204cea4662864280d2f58a40936dca4b50aeb343b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698711555966763404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v2pp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00b895cf-5155-458e-abf7-d890aa8bdb24,},Annotations:map[string]string{io.kubernetes.container.hash: fa9b7280,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5191f89ace8c0a3b9397f7eeb2ea00c979964318f54e77d8ddb900dd10398779,PodSandboxId:f73265bdfb045aa1e48a0fa45c6f3f5237de14c31d459a21a46f44fd5dd75b3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698711554882877203,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-v4lf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0399403f-e33d-4c8e-8420-c3c0e5c622c2,},Annotations:map[string]string{io.kubernetes.container.hash: 2807ee09,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac12a0d51e7929fc6d365fd8a7650cea9dcad0da68fb6795a9aa976e1a4bce2c,PodSandboxId:f229a50755adc4acc8b68706063b0745efae6f91c1fe2645c96686dacf5d67a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698711530359275319,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f111d3056f4e1d7adaf55ddf5c5337f,},Annotations:map[st
ring]string{io.kubernetes.container.hash: d9ba4352,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd07eb095fc372d4ae6d5528a949555e0d68c1a988d72570d8540b179a5bb475,PodSandboxId:2cf58aef9978b4b3d583849c4e7d138c1f0a6a1c9f99534cc106e8f2592ced86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698711528829931236,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437b
cb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef03c12b91aae1b3775189b385685a0f56a34aa7e9cdd6c2ed14c7925555a52,PodSandboxId:35c012b3d849ae8eb3c439a853faa09e996cbd8cab2157abf7e6016fcb2ba3f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698711528848418844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf9a2574e05b88952239bf0bd14806a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 642ee56e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e82a22e28885a3c8d434e75d2ea82b563999341215ebcb4251dfcc84e6f7871,PodSandboxId:44d3e09316994598c639f386b05bc5658953fb47908e9c5ce265e0d79fbd8b0f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698711528802004516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-225140,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map
[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=84b153eb-777d-4a6c-8b78-42f6b57a959e name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0b02ad2f08464       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 minutes ago      Running             storage-provisioner       0                   1246bdda0a39d       storage-provisioner
	d54fc00711e05       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   12 minutes ago      Running             kube-proxy                0                   14505ca26a429       kube-proxy-v2pp4
	5191f89ace8c0       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   12 minutes ago      Running             coredns                   0                   f73265bdfb045       coredns-5644d7b6d9-v4lf9
	ac12a0d51e792       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   13 minutes ago      Running             etcd                      0                   f229a50755adc       etcd-old-k8s-version-225140
	2ef03c12b91aa       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   13 minutes ago      Running             kube-apiserver            0                   35c012b3d849a       kube-apiserver-old-k8s-version-225140
	cd07eb095fc37       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   13 minutes ago      Running             kube-controller-manager   0                   2cf58aef9978b       kube-controller-manager-old-k8s-version-225140
	9e82a22e28885       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   13 minutes ago      Running             kube-scheduler            0                   44d3e09316994       kube-scheduler-old-k8s-version-225140
	
	* 
	* ==> coredns [5191f89ace8c0a3b9397f7eeb2ea00c979964318f54e77d8ddb900dd10398779] <==
	* .:53
	2023-10-31T00:19:15.452Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-10-31T00:19:15.452Z [INFO] CoreDNS-1.6.2
	2023-10-31T00:19:15.452Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-10-31T00:19:49.147Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	[INFO] Reloading complete
	2023-10-31T00:19:49.155Z [INFO] 127.0.0.1:52265 - 16011 "HINFO IN 3183437691474010862.4888761306246306044. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00847492s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-225140
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-225140
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4
	                    minikube.k8s.io/name=old-k8s-version-225140
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_31T00_18_59_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 00:18:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 00:31:54 +0000   Tue, 31 Oct 2023 00:18:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 00:31:54 +0000   Tue, 31 Oct 2023 00:18:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 00:31:54 +0000   Tue, 31 Oct 2023 00:18:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 00:31:54 +0000   Tue, 31 Oct 2023 00:18:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.65
	  Hostname:    old-k8s-version-225140
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 4c7d4d13a26248e28e74f239bcad1ca3
	 System UUID:                4c7d4d13-a262-48e2-8e74-f239bcad1ca3
	 Boot ID:                    a9e0c1a2-cd8b-46f5-84d2-b6651a70c64d
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-v4lf9                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                etcd-old-k8s-version-225140                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-apiserver-old-k8s-version-225140             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-controller-manager-old-k8s-version-225140    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-proxy-v2pp4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-scheduler-old-k8s-version-225140             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                metrics-server-74d5856cc6-hp8k4                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         12m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet, old-k8s-version-225140     Node old-k8s-version-225140 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x7 over 13m)  kubelet, old-k8s-version-225140     Node old-k8s-version-225140 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x8 over 13m)  kubelet, old-k8s-version-225140     Node old-k8s-version-225140 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-225140  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Oct31 00:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073944] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.935184] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.595781] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152881] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.493574] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.637788] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.129336] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.157002] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.123131] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.234741] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[ +19.799605] systemd-fstab-generator[1031]: Ignoring "noauto" for root device
	[  +0.442816] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct31 00:14] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.573148] kauditd_printk_skb: 2 callbacks suppressed
	[Oct31 00:18] systemd-fstab-generator[3138]: Ignoring "noauto" for root device
	[  +0.769444] kauditd_printk_skb: 6 callbacks suppressed
	[Oct31 00:19] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [ac12a0d51e7929fc6d365fd8a7650cea9dcad0da68fb6795a9aa976e1a4bce2c] <==
	* 2023-10-31 00:18:50.517441 I | raft: b2b4141cc3075842 became follower at term 0
	2023-10-31 00:18:50.517461 I | raft: newRaft b2b4141cc3075842 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-10-31 00:18:50.517476 I | raft: b2b4141cc3075842 became follower at term 1
	2023-10-31 00:18:50.526871 W | auth: simple token is not cryptographically signed
	2023-10-31 00:18:50.531849 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-10-31 00:18:50.533165 I | etcdserver: b2b4141cc3075842 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-31 00:18:50.533626 I | etcdserver/membership: added member b2b4141cc3075842 [https://192.168.72.65:2380] to cluster 8411952e25aa5a8
	2023-10-31 00:18:50.534867 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-31 00:18:50.535128 I | embed: listening for metrics on http://192.168.72.65:2381
	2023-10-31 00:18:50.535276 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-31 00:18:50.818064 I | raft: b2b4141cc3075842 is starting a new election at term 1
	2023-10-31 00:18:50.818270 I | raft: b2b4141cc3075842 became candidate at term 2
	2023-10-31 00:18:50.818283 I | raft: b2b4141cc3075842 received MsgVoteResp from b2b4141cc3075842 at term 2
	2023-10-31 00:18:50.818292 I | raft: b2b4141cc3075842 became leader at term 2
	2023-10-31 00:18:50.818297 I | raft: raft.node: b2b4141cc3075842 elected leader b2b4141cc3075842 at term 2
	2023-10-31 00:18:50.818804 I | etcdserver: published {Name:old-k8s-version-225140 ClientURLs:[https://192.168.72.65:2379]} to cluster 8411952e25aa5a8
	2023-10-31 00:18:50.818869 I | embed: ready to serve client requests
	2023-10-31 00:18:50.819641 I | etcdserver: setting up the initial cluster version to 3.3
	2023-10-31 00:18:50.820469 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-31 00:18:50.820597 I | embed: ready to serve client requests
	2023-10-31 00:18:50.820762 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-10-31 00:18:50.820900 I | etcdserver/api: enabled capabilities for version 3.3
	2023-10-31 00:18:50.821734 I | embed: serving client requests on 192.168.72.65:2379
	2023-10-31 00:28:50.846656 I | mvcc: store.index: compact 647
	2023-10-31 00:28:50.849337 I | mvcc: finished scheduled compaction at 647 (took 1.620531ms)
	
	* 
	* ==> kernel <==
	*  00:31:58 up 18 min,  0 users,  load average: 0.41, 0.23, 0.22
	Linux old-k8s-version-225140 5.10.57 #1 SMP Mon Oct 30 21:42:24 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [2ef03c12b91aae1b3775189b385685a0f56a34aa7e9cdd6c2ed14c7925555a52] <==
	* I1031 00:24:55.193508       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1031 00:24:55.193879       1 handler_proxy.go:99] no RequestInfo found in the context
	E1031 00:24:55.193946       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:24:55.193968       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:26:55.194668       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1031 00:26:55.194806       1 handler_proxy.go:99] no RequestInfo found in the context
	E1031 00:26:55.194867       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:26:55.194876       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:28:55.196384       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1031 00:28:55.196731       1 handler_proxy.go:99] no RequestInfo found in the context
	E1031 00:28:55.196865       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:28:55.196927       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:29:55.197462       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1031 00:29:55.197549       1 handler_proxy.go:99] no RequestInfo found in the context
	E1031 00:29:55.197675       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:29:55.197712       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:31:55.198186       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1031 00:31:55.198443       1 handler_proxy.go:99] no RequestInfo found in the context
	E1031 00:31:55.198502       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:31:55.198510       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [cd07eb095fc372d4ae6d5528a949555e0d68c1a988d72570d8540b179a5bb475] <==
	* W1031 00:25:38.134177       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:25:47.311323       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:26:10.136439       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:26:17.563620       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:26:42.138434       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:26:47.815689       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:27:14.140932       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:27:18.067551       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:27:46.143067       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:27:48.320333       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:28:18.145062       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:28:18.572827       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1031 00:28:48.824394       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:28:50.147461       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:29:19.076387       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:29:22.149838       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:29:49.328531       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:29:54.151803       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:30:19.581084       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:30:26.154107       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:30:49.833695       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:30:58.156307       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:31:20.086047       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 00:31:30.158267       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 00:31:50.338417       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [d54fc00711e05416df6782fd1f612a41b9d9f4e8423c613d74902452c45a5d06] <==
	* W1031 00:19:16.241720       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1031 00:19:16.255057       1 node.go:135] Successfully retrieved node IP: 192.168.72.65
	I1031 00:19:16.255379       1 server_others.go:149] Using iptables Proxier.
	I1031 00:19:16.256391       1 server.go:529] Version: v1.16.0
	I1031 00:19:16.258630       1 config.go:313] Starting service config controller
	I1031 00:19:16.258679       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1031 00:19:16.258716       1 config.go:131] Starting endpoints config controller
	I1031 00:19:16.258725       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1031 00:19:16.365846       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1031 00:19:16.365943       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [9e82a22e28885a3c8d434e75d2ea82b563999341215ebcb4251dfcc84e6f7871] <==
	* W1031 00:18:54.190406       1 authentication.go:79] Authentication is disabled
	I1031 00:18:54.190421       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1031 00:18:54.190826       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1031 00:18:54.237335       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1031 00:18:54.247097       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1031 00:18:54.248533       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1031 00:18:54.249425       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1031 00:18:54.250879       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 00:18:54.250919       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1031 00:18:54.250961       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1031 00:18:54.250984       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1031 00:18:54.251039       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 00:18:54.251071       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 00:18:54.254600       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1031 00:18:55.239380       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1031 00:18:55.249921       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1031 00:18:55.259490       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1031 00:18:55.259952       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1031 00:18:55.261107       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 00:18:55.263157       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1031 00:18:55.264692       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1031 00:18:55.266524       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1031 00:18:55.267594       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 00:18:55.271037       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 00:18:55.271932       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-31 00:13:13 UTC, ends at Tue 2023-10-31 00:31:58 UTC. --
	Oct 31 00:27:35 old-k8s-version-225140 kubelet[3144]: E1031 00:27:35.671927    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:27:50 old-k8s-version-225140 kubelet[3144]: E1031 00:27:50.670299    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:28:01 old-k8s-version-225140 kubelet[3144]: E1031 00:28:01.670281    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:28:14 old-k8s-version-225140 kubelet[3144]: E1031 00:28:14.670177    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:28:28 old-k8s-version-225140 kubelet[3144]: E1031 00:28:28.670860    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:28:40 old-k8s-version-225140 kubelet[3144]: E1031 00:28:40.670260    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:28:47 old-k8s-version-225140 kubelet[3144]: E1031 00:28:47.765508    3144 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Oct 31 00:28:53 old-k8s-version-225140 kubelet[3144]: E1031 00:28:53.671165    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:29:05 old-k8s-version-225140 kubelet[3144]: E1031 00:29:05.670048    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:29:19 old-k8s-version-225140 kubelet[3144]: E1031 00:29:19.671619    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:29:31 old-k8s-version-225140 kubelet[3144]: E1031 00:29:31.670072    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:29:44 old-k8s-version-225140 kubelet[3144]: E1031 00:29:44.670046    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:29:56 old-k8s-version-225140 kubelet[3144]: E1031 00:29:56.670584    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:30:11 old-k8s-version-225140 kubelet[3144]: E1031 00:30:11.690811    3144 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 31 00:30:11 old-k8s-version-225140 kubelet[3144]: E1031 00:30:11.690893    3144 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 31 00:30:11 old-k8s-version-225140 kubelet[3144]: E1031 00:30:11.690949    3144 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 31 00:30:11 old-k8s-version-225140 kubelet[3144]: E1031 00:30:11.690976    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Oct 31 00:30:25 old-k8s-version-225140 kubelet[3144]: E1031 00:30:25.673143    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:30:37 old-k8s-version-225140 kubelet[3144]: E1031 00:30:37.674914    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:30:50 old-k8s-version-225140 kubelet[3144]: E1031 00:30:50.670038    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:31:03 old-k8s-version-225140 kubelet[3144]: E1031 00:31:03.670101    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:31:17 old-k8s-version-225140 kubelet[3144]: E1031 00:31:17.673447    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:31:28 old-k8s-version-225140 kubelet[3144]: E1031 00:31:28.670063    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:31:39 old-k8s-version-225140 kubelet[3144]: E1031 00:31:39.670254    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 00:31:54 old-k8s-version-225140 kubelet[3144]: E1031 00:31:54.670177    3144 pod_workers.go:191] Error syncing pod e10e8ea4-e3c4-4db1-911f-8ce365912043 ("metrics-server-74d5856cc6-hp8k4_kube-system(e10e8ea4-e3c4-4db1-911f-8ce365912043)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [0b02ad2f08464e7bf0d0e0152d98a9e3ea4fd9c61fb13c820cd953360ac9df5b] <==
	* I1031 00:19:16.641666       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1031 00:19:16.654394       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1031 00:19:16.654692       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1031 00:19:16.665081       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1031 00:19:16.665954       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-225140_ce0dc0db-8787-4c4d-97f0-3234b29ab329!
	I1031 00:19:16.666680       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4c9cfe66-bee6-4ee2-a864-0a1337880c73", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-225140_ce0dc0db-8787-4c4d-97f0-3234b29ab329 became leader
	I1031 00:19:16.767851       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-225140_ce0dc0db-8787-4c4d-97f0-3234b29ab329!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-225140 -n old-k8s-version-225140
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-225140 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-hp8k4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-225140 describe pod metrics-server-74d5856cc6-hp8k4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-225140 describe pod metrics-server-74d5856cc6-hp8k4: exit status 1 (90.977878ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-hp8k4" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-225140 describe pod metrics-server-74d5856cc6-hp8k4: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (148.80s)
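Note: the repeated metrics-server ErrImagePull/ImagePullBackOff entries in the kubelet log above are consistent with the addon having been enabled against the unreachable registry fake.domain for this profile, as recorded in the Audit table. A minimal sketch of that configuration (assuming the locally built minikube binary and a profile of the same name; not part of the captured run):

	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-225140 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain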

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (60.54s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-640155 -n no-preload-640155
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-10-31 00:32:25.833806838 +0000 UTC m=+5448.083823746
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-640155 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-640155 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.785µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-640155 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
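Note: the assertion above checks that the dashboard-metrics-scraper deployment carries the expected custom image registry.k8s.io/echoserver:1.4; the describe call returned nothing because the test's context deadline had already expired. A minimal sketch of the same lookup run by hand, assuming the cluster is still reachable:

	kubectl --context no-preload-640155 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'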
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-640155 -n no-preload-640155
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-640155 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-640155 logs -n 25: (1.235143468s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p stopped-upgrade-237143                              | stopped-upgrade-237143       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p stopped-upgrade-237143                              | stopped-upgrade-237143       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC | 31 Oct 23 00:04 UTC |
	| start   | -p embed-certs-078843                                  | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC | 31 Oct 23 00:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-225140        | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC | 31 Oct 23 00:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-225140                              | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-640155             | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:05 UTC | 31 Oct 23 00:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-640155                                   | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| start   | -p cert-expiration-663908                              | cert-expiration-663908       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:05 UTC | 31 Oct 23 00:06 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-078843            | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-078843                                  | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-663908                              | cert-expiration-663908       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-221554 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:06 UTC |
	|         | disable-driver-mounts-221554                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:06 UTC | 31 Oct 23 00:07 UTC |
	|         | default-k8s-diff-port-892233                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-225140             | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-225140                              | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC | 31 Oct 23 00:20 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-892233  | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC | 31 Oct 23 00:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC |                     |
	|         | default-k8s-diff-port-892233                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-640155                  | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-640155                                   | no-preload-640155            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:07 UTC | 31 Oct 23 00:22 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-078843                 | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-078843                                  | embed-certs-078843           | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:08 UTC | 31 Oct 23 00:17 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-892233       | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-892233 | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:09 UTC | 31 Oct 23 00:18 UTC |
	|         | default-k8s-diff-port-892233                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| delete  | -p old-k8s-version-225140                              | old-k8s-version-225140       | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:31 UTC | 31 Oct 23 00:32 UTC |
	| start   | -p newest-cni-558362 --memory=2200 --alsologtostderr   | newest-cni-558362            | jenkins | v1.32.0-beta.0 | 31 Oct 23 00:32 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 00:32:00
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 00:32:00.553239  254175 out.go:296] Setting OutFile to fd 1 ...
	I1031 00:32:00.553593  254175 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:32:00.553609  254175 out.go:309] Setting ErrFile to fd 2...
	I1031 00:32:00.553616  254175 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:32:00.553913  254175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1031 00:32:00.554579  254175 out.go:303] Setting JSON to false
	I1031 00:32:00.555635  254175 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":29673,"bootTime":1698682648,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 00:32:00.555694  254175 start.go:138] virtualization: kvm guest
	I1031 00:32:00.558168  254175 out.go:177] * [newest-cni-558362] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 00:32:00.559819  254175 out.go:177]   - MINIKUBE_LOCATION=17527
	I1031 00:32:00.559921  254175 notify.go:220] Checking for updates...
	I1031 00:32:00.561507  254175 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 00:32:00.563149  254175 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:32:00.564842  254175 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1031 00:32:00.566329  254175 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 00:32:00.567680  254175 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 00:32:00.569487  254175 config.go:182] Loaded profile config "default-k8s-diff-port-892233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:32:00.569605  254175 config.go:182] Loaded profile config "embed-certs-078843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:32:00.569709  254175 config.go:182] Loaded profile config "no-preload-640155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:32:00.569832  254175 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 00:32:00.609672  254175 out.go:177] * Using the kvm2 driver based on user configuration
	I1031 00:32:00.611041  254175 start.go:298] selected driver: kvm2
	I1031 00:32:00.611060  254175 start.go:902] validating driver "kvm2" against <nil>
	I1031 00:32:00.611071  254175 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 00:32:00.611878  254175 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:32:00.611959  254175 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17527-208817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 00:32:00.629717  254175 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 00:32:00.629815  254175 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1031 00:32:00.629847  254175 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1031 00:32:00.630257  254175 start_flags.go:953] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1031 00:32:00.630371  254175 cni.go:84] Creating CNI manager for ""
	I1031 00:32:00.630390  254175 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1031 00:32:00.630405  254175 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1031 00:32:00.630420  254175 start_flags.go:323] config:
	{Name:newest-cni-558362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-558362 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 00:32:00.630704  254175 iso.go:125] acquiring lock: {Name:mk17c26869b21ec4c3726ac5b4b2fb393d92c043 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 00:32:00.632759  254175 out.go:177] * Starting control plane node newest-cni-558362 in cluster newest-cni-558362
	I1031 00:32:00.634161  254175 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1031 00:32:00.634217  254175 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1031 00:32:00.634232  254175 cache.go:56] Caching tarball of preloaded images
	I1031 00:32:00.634372  254175 preload.go:174] Found /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1031 00:32:00.634385  254175 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
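(Editor's note: the preload lines above record a plain local-cache check: if the versioned image tarball already exists under the cache directory, the download step is skipped. A minimal, standalone Go sketch of the same idea follows; the naming scheme is copied from the log, but the base directory and helper are illustrative, not minikube's actual preload.go.)

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath rebuilds the cache location checked in the log above for a
// given Kubernetes version and runtime (file name pattern taken from the log).
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	home, _ := os.UserHomeDir()
	p := preloadPath(filepath.Join(home, ".minikube"), "v1.28.3", "cri-o")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("Found local preload, skipping download:", p)
	} else {
		fmt.Println("No local preload cached at:", p)
	}
}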
	I1031 00:32:00.634525  254175 profile.go:148] Saving config to /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/newest-cni-558362/config.json ...
	I1031 00:32:00.634558  254175 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/newest-cni-558362/config.json: {Name:mkbaf70da97b8c388bd0166d5d3f476ee5cc4ea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 00:32:00.634804  254175 start.go:365] acquiring machines lock for newest-cni-558362: {Name:mkae4ad3fd2c31b7553c18e3e5d943ac06998c52 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 00:32:00.634857  254175 start.go:369] acquired machines lock for "newest-cni-558362" in 37.053µs
	I1031 00:32:00.634883  254175 start.go:93] Provisioning new machine with config: &{Name:newest-cni-558362 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-558362 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenki
ns:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1031 00:32:00.634989  254175 start.go:125] createHost starting for "" (driver="kvm2")
	I1031 00:32:00.636835  254175 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1031 00:32:00.637093  254175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1031 00:32:00.637153  254175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 00:32:00.652588  254175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41021
	I1031 00:32:00.653025  254175 main.go:141] libmachine: () Calling .GetVersion
	I1031 00:32:00.653565  254175 main.go:141] libmachine: Using API Version  1
	I1031 00:32:00.653590  254175 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 00:32:00.654887  254175 main.go:141] libmachine: () Calling .GetMachineName
	I1031 00:32:00.655426  254175 main.go:141] libmachine: (newest-cni-558362) Calling .GetMachineName
	I1031 00:32:00.655641  254175 main.go:141] libmachine: (newest-cni-558362) Calling .DriverName
	I1031 00:32:00.655848  254175 start.go:159] libmachine.API.Create for "newest-cni-558362" (driver="kvm2")
	I1031 00:32:00.655903  254175 client.go:168] LocalClient.Create starting
	I1031 00:32:00.655960  254175 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17527-208817/.minikube/certs/ca.pem
	I1031 00:32:00.656001  254175 main.go:141] libmachine: Decoding PEM data...
	I1031 00:32:00.656022  254175 main.go:141] libmachine: Parsing certificate...
	I1031 00:32:00.656091  254175 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17527-208817/.minikube/certs/cert.pem
	I1031 00:32:00.656119  254175 main.go:141] libmachine: Decoding PEM data...
	I1031 00:32:00.656137  254175 main.go:141] libmachine: Parsing certificate...
	I1031 00:32:00.656163  254175 main.go:141] libmachine: Running pre-create checks...
	I1031 00:32:00.656177  254175 main.go:141] libmachine: (newest-cni-558362) Calling .PreCreateCheck
	I1031 00:32:00.656552  254175 main.go:141] libmachine: (newest-cni-558362) Calling .GetConfigRaw
	I1031 00:32:00.656981  254175 main.go:141] libmachine: Creating machine...
	I1031 00:32:00.657000  254175 main.go:141] libmachine: (newest-cni-558362) Calling .Create
	I1031 00:32:00.657157  254175 main.go:141] libmachine: (newest-cni-558362) Creating KVM machine...
	I1031 00:32:00.658430  254175 main.go:141] libmachine: (newest-cni-558362) DBG | found existing default KVM network
	I1031 00:32:00.659824  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:00.659611  254198 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:97:44:a3} reservation:<nil>}
	I1031 00:32:00.660688  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:00.660577  254198 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:85:48:10} reservation:<nil>}
	I1031 00:32:00.661468  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:00.661392  254198 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:72:df:89} reservation:<nil>}
	I1031 00:32:00.662636  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:00.662554  254198 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00033a560}
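(Editor's note: the four network.go lines above show the subnet picker walking candidate private /24 ranges and taking the first one that no local interface already occupies. Below is a self-contained sketch of that selection logic; the candidate list and the interface scan are simplified and this is not minikube's actual network.go.)

package main

import (
	"fmt"
	"net"
)

// takenSubnets collects the IPv4 networks bound to local interfaces
// (the virbr2/virbr3/virbr4 ranges in the log), so candidates can be tested.
func takenSubnets() ([]*net.IPNet, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	var nets []*net.IPNet
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && ipn.IP.To4() != nil {
			nets = append(nets, ipn)
		}
	}
	return nets, nil
}

func main() {
	taken, err := takenSubnets()
	if err != nil {
		panic(err)
	}
	// Probe the same 192.168.x.0/24 family the log walks (39, 50, 61, 72, ...).
	for _, third := range []int{39, 50, 61, 72, 83, 94} {
		_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		free := true
		for _, t := range taken {
			if t.Contains(candidate.IP) || candidate.Contains(t.IP) {
				fmt.Println("skipping subnet", candidate, "that is taken by", t)
				free = false
				break
			}
		}
		if free {
			fmt.Println("using free private subnet", candidate)
			return
		}
	}
	fmt.Println("no free subnet found in the probed range")
}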
	I1031 00:32:00.668607  254175 main.go:141] libmachine: (newest-cni-558362) DBG | trying to create private KVM network mk-newest-cni-558362 192.168.72.0/24...
	I1031 00:32:00.746104  254175 main.go:141] libmachine: (newest-cni-558362) DBG | private KVM network mk-newest-cni-558362 192.168.72.0/24 created
	I1031 00:32:00.746148  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:00.746090  254198 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17527-208817/.minikube
	I1031 00:32:00.746166  254175 main.go:141] libmachine: (newest-cni-558362) Setting up store path in /home/jenkins/minikube-integration/17527-208817/.minikube/machines/newest-cni-558362 ...
	I1031 00:32:00.746184  254175 main.go:141] libmachine: (newest-cni-558362) Building disk image from file:///home/jenkins/minikube-integration/17527-208817/.minikube/cache/iso/amd64/minikube-v1.32.0-1698684775-17527-amd64.iso
	I1031 00:32:00.746364  254175 main.go:141] libmachine: (newest-cni-558362) Downloading /home/jenkins/minikube-integration/17527-208817/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17527-208817/.minikube/cache/iso/amd64/minikube-v1.32.0-1698684775-17527-amd64.iso...
	I1031 00:32:00.985824  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:00.985675  254198 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/newest-cni-558362/id_rsa...
	I1031 00:32:01.087623  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:01.087484  254198 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/newest-cni-558362/newest-cni-558362.rawdisk...
	I1031 00:32:01.087653  254175 main.go:141] libmachine: (newest-cni-558362) DBG | Writing magic tar header
	I1031 00:32:01.087667  254175 main.go:141] libmachine: (newest-cni-558362) DBG | Writing SSH key tar header
	I1031 00:32:01.087677  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:01.087623  254198 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17527-208817/.minikube/machines/newest-cni-558362 ...
	I1031 00:32:01.087866  254175 main.go:141] libmachine: (newest-cni-558362) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/newest-cni-558362
	I1031 00:32:01.087920  254175 main.go:141] libmachine: (newest-cni-558362) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube/machines
	I1031 00:32:01.087932  254175 main.go:141] libmachine: (newest-cni-558362) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube/machines/newest-cni-558362 (perms=drwx------)
	I1031 00:32:01.087956  254175 main.go:141] libmachine: (newest-cni-558362) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube/machines (perms=drwxr-xr-x)
	I1031 00:32:01.087979  254175 main.go:141] libmachine: (newest-cni-558362) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817/.minikube (perms=drwxr-xr-x)
	I1031 00:32:01.087999  254175 main.go:141] libmachine: (newest-cni-558362) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817/.minikube
	I1031 00:32:01.088017  254175 main.go:141] libmachine: (newest-cni-558362) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17527-208817
	I1031 00:32:01.088031  254175 main.go:141] libmachine: (newest-cni-558362) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 00:32:01.088049  254175 main.go:141] libmachine: (newest-cni-558362) DBG | Checking permissions on dir: /home/jenkins
	I1031 00:32:01.088064  254175 main.go:141] libmachine: (newest-cni-558362) DBG | Checking permissions on dir: /home
	I1031 00:32:01.088082  254175 main.go:141] libmachine: (newest-cni-558362) Setting executable bit set on /home/jenkins/minikube-integration/17527-208817 (perms=drwxrwxr-x)
	I1031 00:32:01.088100  254175 main.go:141] libmachine: (newest-cni-558362) DBG | Skipping /home - not owner
	I1031 00:32:01.088117  254175 main.go:141] libmachine: (newest-cni-558362) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 00:32:01.088140  254175 main.go:141] libmachine: (newest-cni-558362) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 00:32:01.088151  254175 main.go:141] libmachine: (newest-cni-558362) Creating domain...
	I1031 00:32:01.089115  254175 main.go:141] libmachine: (newest-cni-558362) define libvirt domain using xml: 
	I1031 00:32:01.089143  254175 main.go:141] libmachine: (newest-cni-558362) <domain type='kvm'>
	I1031 00:32:01.089157  254175 main.go:141] libmachine: (newest-cni-558362)   <name>newest-cni-558362</name>
	I1031 00:32:01.089167  254175 main.go:141] libmachine: (newest-cni-558362)   <memory unit='MiB'>2200</memory>
	I1031 00:32:01.089182  254175 main.go:141] libmachine: (newest-cni-558362)   <vcpu>2</vcpu>
	I1031 00:32:01.089201  254175 main.go:141] libmachine: (newest-cni-558362)   <features>
	I1031 00:32:01.089212  254175 main.go:141] libmachine: (newest-cni-558362)     <acpi/>
	I1031 00:32:01.089225  254175 main.go:141] libmachine: (newest-cni-558362)     <apic/>
	I1031 00:32:01.089236  254175 main.go:141] libmachine: (newest-cni-558362)     <pae/>
	I1031 00:32:01.089247  254175 main.go:141] libmachine: (newest-cni-558362)     
	I1031 00:32:01.089283  254175 main.go:141] libmachine: (newest-cni-558362)   </features>
	I1031 00:32:01.089311  254175 main.go:141] libmachine: (newest-cni-558362)   <cpu mode='host-passthrough'>
	I1031 00:32:01.089332  254175 main.go:141] libmachine: (newest-cni-558362)   
	I1031 00:32:01.089349  254175 main.go:141] libmachine: (newest-cni-558362)   </cpu>
	I1031 00:32:01.089376  254175 main.go:141] libmachine: (newest-cni-558362)   <os>
	I1031 00:32:01.089390  254175 main.go:141] libmachine: (newest-cni-558362)     <type>hvm</type>
	I1031 00:32:01.089399  254175 main.go:141] libmachine: (newest-cni-558362)     <boot dev='cdrom'/>
	I1031 00:32:01.089411  254175 main.go:141] libmachine: (newest-cni-558362)     <boot dev='hd'/>
	I1031 00:32:01.089428  254175 main.go:141] libmachine: (newest-cni-558362)     <bootmenu enable='no'/>
	I1031 00:32:01.089435  254175 main.go:141] libmachine: (newest-cni-558362)   </os>
	I1031 00:32:01.089441  254175 main.go:141] libmachine: (newest-cni-558362)   <devices>
	I1031 00:32:01.089447  254175 main.go:141] libmachine: (newest-cni-558362)     <disk type='file' device='cdrom'>
	I1031 00:32:01.089456  254175 main.go:141] libmachine: (newest-cni-558362)       <source file='/home/jenkins/minikube-integration/17527-208817/.minikube/machines/newest-cni-558362/boot2docker.iso'/>
	I1031 00:32:01.089465  254175 main.go:141] libmachine: (newest-cni-558362)       <target dev='hdc' bus='scsi'/>
	I1031 00:32:01.089471  254175 main.go:141] libmachine: (newest-cni-558362)       <readonly/>
	I1031 00:32:01.089479  254175 main.go:141] libmachine: (newest-cni-558362)     </disk>
	I1031 00:32:01.089486  254175 main.go:141] libmachine: (newest-cni-558362)     <disk type='file' device='disk'>
	I1031 00:32:01.089495  254175 main.go:141] libmachine: (newest-cni-558362)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 00:32:01.089504  254175 main.go:141] libmachine: (newest-cni-558362)       <source file='/home/jenkins/minikube-integration/17527-208817/.minikube/machines/newest-cni-558362/newest-cni-558362.rawdisk'/>
	I1031 00:32:01.089521  254175 main.go:141] libmachine: (newest-cni-558362)       <target dev='hda' bus='virtio'/>
	I1031 00:32:01.089529  254175 main.go:141] libmachine: (newest-cni-558362)     </disk>
	I1031 00:32:01.089537  254175 main.go:141] libmachine: (newest-cni-558362)     <interface type='network'>
	I1031 00:32:01.089546  254175 main.go:141] libmachine: (newest-cni-558362)       <source network='mk-newest-cni-558362'/>
	I1031 00:32:01.089552  254175 main.go:141] libmachine: (newest-cni-558362)       <model type='virtio'/>
	I1031 00:32:01.089560  254175 main.go:141] libmachine: (newest-cni-558362)     </interface>
	I1031 00:32:01.089595  254175 main.go:141] libmachine: (newest-cni-558362)     <interface type='network'>
	I1031 00:32:01.089622  254175 main.go:141] libmachine: (newest-cni-558362)       <source network='default'/>
	I1031 00:32:01.089643  254175 main.go:141] libmachine: (newest-cni-558362)       <model type='virtio'/>
	I1031 00:32:01.089662  254175 main.go:141] libmachine: (newest-cni-558362)     </interface>
	I1031 00:32:01.089673  254175 main.go:141] libmachine: (newest-cni-558362)     <serial type='pty'>
	I1031 00:32:01.089678  254175 main.go:141] libmachine: (newest-cni-558362)       <target port='0'/>
	I1031 00:32:01.089687  254175 main.go:141] libmachine: (newest-cni-558362)     </serial>
	I1031 00:32:01.089692  254175 main.go:141] libmachine: (newest-cni-558362)     <console type='pty'>
	I1031 00:32:01.089701  254175 main.go:141] libmachine: (newest-cni-558362)       <target type='serial' port='0'/>
	I1031 00:32:01.089707  254175 main.go:141] libmachine: (newest-cni-558362)     </console>
	I1031 00:32:01.089716  254175 main.go:141] libmachine: (newest-cni-558362)     <rng model='virtio'>
	I1031 00:32:01.089723  254175 main.go:141] libmachine: (newest-cni-558362)       <backend model='random'>/dev/random</backend>
	I1031 00:32:01.089755  254175 main.go:141] libmachine: (newest-cni-558362)     </rng>
	I1031 00:32:01.089783  254175 main.go:141] libmachine: (newest-cni-558362)     
	I1031 00:32:01.089799  254175 main.go:141] libmachine: (newest-cni-558362)     
	I1031 00:32:01.089813  254175 main.go:141] libmachine: (newest-cni-558362)   </devices>
	I1031 00:32:01.089829  254175 main.go:141] libmachine: (newest-cni-558362) </domain>
	I1031 00:32:01.089845  254175 main.go:141] libmachine: (newest-cni-558362) 
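(Editor's note: the block above is the libvirt domain XML the kvm2 driver defines for the new VM, printed line by line. As a hedged illustration of how such a definition can be produced, here is a standalone Go sketch that fills a pared-down version of that XML from a template; names, paths and the reduced device list are illustrative, not the driver's actual code.)

package main

import (
	"os"
	"text/template"
)

// A trimmed-down form of the domain definition printed in the log above;
// only name, memory, vcpus, the two disks and one network interface remain.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainParams struct {
	Name      string
	MemoryMiB int
	CPUs      int
	ISOPath   string
	DiskPath  string
	Network   string
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	// Values taken from the log above; the two file paths are placeholders.
	p := domainParams{
		Name:      "newest-cni-558362",
		MemoryMiB: 2200,
		CPUs:      2,
		ISOPath:   "/path/to/boot2docker.iso",
		DiskPath:  "/path/to/newest-cni-558362.rawdisk",
		Network:   "mk-newest-cni-558362",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}

The rendered XML is then handed to libvirt to define and start the domain, which is what the "define libvirt domain using xml" and "Creating domain..." steps above are doing.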
	I1031 00:32:01.094253  254175 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:16:ba:f0 in network default
	I1031 00:32:01.094849  254175 main.go:141] libmachine: (newest-cni-558362) Ensuring networks are active...
	I1031 00:32:01.094878  254175 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:32:01.095559  254175 main.go:141] libmachine: (newest-cni-558362) Ensuring network default is active
	I1031 00:32:01.095842  254175 main.go:141] libmachine: (newest-cni-558362) Ensuring network mk-newest-cni-558362 is active
	I1031 00:32:01.096355  254175 main.go:141] libmachine: (newest-cni-558362) Getting domain xml...
	I1031 00:32:01.097148  254175 main.go:141] libmachine: (newest-cni-558362) Creating domain...
	I1031 00:32:02.394730  254175 main.go:141] libmachine: (newest-cni-558362) Waiting to get IP...
	I1031 00:32:02.395805  254175 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:32:02.396467  254175 main.go:141] libmachine: (newest-cni-558362) DBG | unable to find current IP address of domain newest-cni-558362 in network mk-newest-cni-558362
	I1031 00:32:02.396499  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:02.396417  254198 retry.go:31] will retry after 292.543216ms: waiting for machine to come up
	I1031 00:32:02.691178  254175 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:32:02.691713  254175 main.go:141] libmachine: (newest-cni-558362) DBG | unable to find current IP address of domain newest-cni-558362 in network mk-newest-cni-558362
	I1031 00:32:02.691756  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:02.691674  254198 retry.go:31] will retry after 261.349412ms: waiting for machine to come up
	I1031 00:32:02.955346  254175 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:32:02.955883  254175 main.go:141] libmachine: (newest-cni-558362) DBG | unable to find current IP address of domain newest-cni-558362 in network mk-newest-cni-558362
	I1031 00:32:02.955927  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:02.955845  254198 retry.go:31] will retry after 455.659374ms: waiting for machine to come up
	I1031 00:32:03.413125  254175 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:32:03.413608  254175 main.go:141] libmachine: (newest-cni-558362) DBG | unable to find current IP address of domain newest-cni-558362 in network mk-newest-cni-558362
	I1031 00:32:03.413642  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:03.413557  254198 retry.go:31] will retry after 402.610173ms: waiting for machine to come up
	I1031 00:32:03.818328  254175 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:32:03.818750  254175 main.go:141] libmachine: (newest-cni-558362) DBG | unable to find current IP address of domain newest-cni-558362 in network mk-newest-cni-558362
	I1031 00:32:03.818777  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:03.818713  254198 retry.go:31] will retry after 560.293204ms: waiting for machine to come up
	I1031 00:32:04.380469  254175 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:32:04.381001  254175 main.go:141] libmachine: (newest-cni-558362) DBG | unable to find current IP address of domain newest-cni-558362 in network mk-newest-cni-558362
	I1031 00:32:04.381030  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:04.380949  254198 retry.go:31] will retry after 768.643463ms: waiting for machine to come up
	I1031 00:32:05.151779  254175 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:32:05.152222  254175 main.go:141] libmachine: (newest-cni-558362) DBG | unable to find current IP address of domain newest-cni-558362 in network mk-newest-cni-558362
	I1031 00:32:05.152265  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:05.152187  254198 retry.go:31] will retry after 1.131269272s: waiting for machine to come up
	I1031 00:32:06.284534  254175 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:32:06.284926  254175 main.go:141] libmachine: (newest-cni-558362) DBG | unable to find current IP address of domain newest-cni-558362 in network mk-newest-cni-558362
	I1031 00:32:06.284969  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:06.284880  254198 retry.go:31] will retry after 1.046259693s: waiting for machine to come up
	I1031 00:32:07.332918  254175 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:32:07.333382  254175 main.go:141] libmachine: (newest-cni-558362) DBG | unable to find current IP address of domain newest-cni-558362 in network mk-newest-cni-558362
	I1031 00:32:07.333414  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:07.333329  254198 retry.go:31] will retry after 1.475920828s: waiting for machine to come up
	I1031 00:32:08.811496  254175 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:32:08.812121  254175 main.go:141] libmachine: (newest-cni-558362) DBG | unable to find current IP address of domain newest-cni-558362 in network mk-newest-cni-558362
	I1031 00:32:08.812151  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:08.812081  254198 retry.go:31] will retry after 1.614303073s: waiting for machine to come up
	I1031 00:32:10.428482  254175 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:32:10.428993  254175 main.go:141] libmachine: (newest-cni-558362) DBG | unable to find current IP address of domain newest-cni-558362 in network mk-newest-cni-558362
	I1031 00:32:10.429029  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:10.428934  254198 retry.go:31] will retry after 2.547252229s: waiting for machine to come up
	I1031 00:32:12.978045  254175 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:32:12.978538  254175 main.go:141] libmachine: (newest-cni-558362) DBG | unable to find current IP address of domain newest-cni-558362 in network mk-newest-cni-558362
	I1031 00:32:12.978568  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:12.978493  254198 retry.go:31] will retry after 3.504850455s: waiting for machine to come up
	I1031 00:32:16.485275  254175 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:32:16.485745  254175 main.go:141] libmachine: (newest-cni-558362) DBG | unable to find current IP address of domain newest-cni-558362 in network mk-newest-cni-558362
	I1031 00:32:16.485769  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:16.485705  254198 retry.go:31] will retry after 4.173086273s: waiting for machine to come up
	I1031 00:32:20.660265  254175 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:32:20.660726  254175 main.go:141] libmachine: (newest-cni-558362) DBG | unable to find current IP address of domain newest-cni-558362 in network mk-newest-cni-558362
	I1031 00:32:20.660750  254175 main.go:141] libmachine: (newest-cni-558362) DBG | I1031 00:32:20.660682  254198 retry.go:31] will retry after 4.485543556s: waiting for machine to come up
	I1031 00:32:25.151338  254175 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:32:25.151833  254175 main.go:141] libmachine: (newest-cni-558362) Found IP for machine: 192.168.72.163
	I1031 00:32:25.151868  254175 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has current primary IP address 192.168.72.163 and MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:32:25.151878  254175 main.go:141] libmachine: (newest-cni-558362) Reserving static IP address...
	I1031 00:32:25.152185  254175 main.go:141] libmachine: (newest-cni-558362) DBG | unable to find host DHCP lease matching {name: "newest-cni-558362", mac: "52:54:00:41:0f:39", ip: "192.168.72.163"} in network mk-newest-cni-558362
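(Editor's note: the long run of retry.go lines above is a poll loop: ask for the domain's IP, and if it is not available yet, sleep for a growing, jittered interval and try again until success or a timeout. The following is a standalone sketch of that pattern; the probe function is a stand-in, since the real lookup goes through the libvirt DHCP lease table.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls probe until it returns an address, sleeping for an
// exponentially growing, jittered interval between attempts, matching the
// "will retry after ..." cadence in the log above.
func waitForIP(probe func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := probe()
		if err == nil && ip != "" {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		// Add up to 50% jitter so concurrent waiters do not probe in lockstep.
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}

func main() {
	// Fake probe: the "machine" only gets an IP after about three seconds.
	start := time.Now()
	probe := func() (string, error) {
		if time.Since(start) > 3*time.Second {
			return "192.168.72.163", nil
		}
		return "", errors.New("unable to find current IP address of domain")
	}
	ip, err := waitForIP(probe, 30*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Println("Found IP for machine:", ip)
}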
	I1031 00:32:25.234670  254175 main.go:141] libmachine: (newest-cni-558362) DBG | Getting to WaitForSSH function...
	I1031 00:32:25.234707  254175 main.go:141] libmachine: (newest-cni-558362) Reserved static IP address: 192.168.72.163
	I1031 00:32:25.234760  254175 main.go:141] libmachine: (newest-cni-558362) Waiting for SSH to be available...
	I1031 00:32:25.237584  254175 main.go:141] libmachine: (newest-cni-558362) DBG | domain newest-cni-558362 has defined MAC address 52:54:00:41:0f:39 in network mk-newest-cni-558362
	I1031 00:32:25.237997  254175 main.go:141] libmachine: (newest-cni-558362) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:41:0f:39", ip: ""} in network mk-newest-cni-558362
	I1031 00:32:25.238024  254175 main.go:141] libmachine: (newest-cni-558362) DBG | unable to find defined IP address of network mk-newest-cni-558362 interface with MAC address 52:54:00:41:0f:39
	I1031 00:32:25.238168  254175 main.go:141] libmachine: (newest-cni-558362) DBG | Using SSH client type: external
	I1031 00:32:25.238204  254175 main.go:141] libmachine: (newest-cni-558362) DBG | Using SSH private key: /home/jenkins/minikube-integration/17527-208817/.minikube/machines/newest-cni-558362/id_rsa (-rw-------)
	I1031 00:32:25.238239  254175 main.go:141] libmachine: (newest-cni-558362) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17527-208817/.minikube/machines/newest-cni-558362/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 00:32:25.238259  254175 main.go:141] libmachine: (newest-cni-558362) DBG | About to run SSH command:
	I1031 00:32:25.238273  254175 main.go:141] libmachine: (newest-cni-558362) DBG | exit 0
	I1031 00:32:25.242230  254175 main.go:141] libmachine: (newest-cni-558362) DBG | SSH cmd err, output: exit status 255: 
	I1031 00:32:25.242256  254175 main.go:141] libmachine: (newest-cni-558362) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1031 00:32:25.242265  254175 main.go:141] libmachine: (newest-cni-558362) DBG | command : exit 0
	I1031 00:32:25.242275  254175 main.go:141] libmachine: (newest-cni-558362) DBG | err     : exit status 255
	I1031 00:32:25.242283  254175 main.go:141] libmachine: (newest-cni-558362) DBG | output  : 
	
	* 
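(Editor's note: the WaitForSSH block that ends above shells out to the system ssh client with host-key checking disabled and runs `exit 0`; a non-zero status such as the 255 seen here simply means sshd inside the guest is not accepting connections yet, and the probe is retried. Below is a minimal Go sketch of such a probe. It assumes an ssh binary on PATH, and the address, user and key path are placeholders, not values to be read back from this report.)

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs `ssh ... exit 0` against the guest, mirroring the external
// SSH probe in the log above. A nil error means sshd answered and ran the command.
func sshReady(addr, user, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, addr),
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	// Placeholder values; in the log the address comes from the DHCP lease
	// and the key is the per-machine id_rsa generated earlier.
	if err := sshReady("192.168.72.163", "docker", "/path/to/machines/newest-cni-558362/id_rsa"); err != nil {
		fmt.Println("SSH not ready yet:", err)
		return
	}
	fmt.Println("SSH is available")
}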
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-31 00:12:05 UTC, ends at Tue 2023-10-31 00:32:26 UTC. --
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.569777079Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712346569764886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=980412c6-0916-4597-b2ca-8badd7aa21fe name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.570664434Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5f181d7e-a85d-45aa-87f0-9e304854f249 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.570713212Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5f181d7e-a85d-45aa-87f0-9e304854f249 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.570895562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07,PodSandboxId:812c17a71bef27cd1a4b5e6e267981abad85c7899ec1142462a56f979fc80069,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698711467881960246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf2b5d0-1773-4ee6-882d-daff300f9d80,},Annotations:map[string]string{io.kubernetes.container.hash: 8b11db42,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373,PodSandboxId:df3a07191232d109244e31a29145f55fc6065949a6f00882fd5d0a8a1494b444,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698711467605206305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc67cf4-4a59-42bf-a6ca-b2be409f5077,},Annotations:map[string]string{io.kubernetes.container.hash: 3be23bff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e,PodSandboxId:7293c197a03b3201abc827276f5ea75d4abe60534d11435b0fed383dd4ea9771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698711467061316230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gp6pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7086342-a1ed-42b3-819a-ad7d8211ad17,},Annotations:map[string]string{io.kubernetes.container.hash: 5ee357d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c,PodSandboxId:d273e52b8919ce1f86ecb6ffc378b1a2966c7436139bbe047ea9e12bd95c38b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698711443858345748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
78de84bf9e4cea78d031c625cd991114,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3,PodSandboxId:c51a7b199e872c10c757926de1fbcc7f35b35879896087c54e04905a9b99fff3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698711443768156345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d006c17ee88c57b42e8328304b6f774,},Annotations:map[
string]string{io.kubernetes.container.hash: 3cd2a05e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb,PodSandboxId:9912485c08eacbf8a42dd77186c2a7efc211ed49abfd27f8d71f3eb36b66e3bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698711443691928392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03ea8799ec6c67cdc310b5507b
f1e01d,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850,PodSandboxId:cfb58aefd8cc0020511742f06ffe0d99edd92ea63fed0214e636944b75b4beb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698711443374523373,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c464abba4e6ceb32924cfebc2fc059e7,},An
notations:map[string]string{io.kubernetes.container.hash: 362a7add,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5f181d7e-a85d-45aa-87f0-9e304854f249 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.617335896Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3072ec38-c169-43af-a7ed-c9d91e02be5f name=/runtime.v1.RuntimeService/Version
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.617487013Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3072ec38-c169-43af-a7ed-c9d91e02be5f name=/runtime.v1.RuntimeService/Version
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.618937429Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=421957d0-3ca3-40ee-917e-822647795657 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.619616994Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712346619602942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=421957d0-3ca3-40ee-917e-822647795657 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.620471072Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=abb29758-56fb-4380-bb4f-127d689edc45 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.620538554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=abb29758-56fb-4380-bb4f-127d689edc45 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.620715454Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07,PodSandboxId:812c17a71bef27cd1a4b5e6e267981abad85c7899ec1142462a56f979fc80069,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698711467881960246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf2b5d0-1773-4ee6-882d-daff300f9d80,},Annotations:map[string]string{io.kubernetes.container.hash: 8b11db42,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373,PodSandboxId:df3a07191232d109244e31a29145f55fc6065949a6f00882fd5d0a8a1494b444,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698711467605206305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc67cf4-4a59-42bf-a6ca-b2be409f5077,},Annotations:map[string]string{io.kubernetes.container.hash: 3be23bff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e,PodSandboxId:7293c197a03b3201abc827276f5ea75d4abe60534d11435b0fed383dd4ea9771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698711467061316230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gp6pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7086342-a1ed-42b3-819a-ad7d8211ad17,},Annotations:map[string]string{io.kubernetes.container.hash: 5ee357d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c,PodSandboxId:d273e52b8919ce1f86ecb6ffc378b1a2966c7436139bbe047ea9e12bd95c38b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698711443858345748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
78de84bf9e4cea78d031c625cd991114,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3,PodSandboxId:c51a7b199e872c10c757926de1fbcc7f35b35879896087c54e04905a9b99fff3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698711443768156345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d006c17ee88c57b42e8328304b6f774,},Annotations:map[
string]string{io.kubernetes.container.hash: 3cd2a05e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb,PodSandboxId:9912485c08eacbf8a42dd77186c2a7efc211ed49abfd27f8d71f3eb36b66e3bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698711443691928392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03ea8799ec6c67cdc310b5507b
f1e01d,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850,PodSandboxId:cfb58aefd8cc0020511742f06ffe0d99edd92ea63fed0214e636944b75b4beb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698711443374523373,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c464abba4e6ceb32924cfebc2fc059e7,},An
notations:map[string]string{io.kubernetes.container.hash: 362a7add,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=abb29758-56fb-4380-bb4f-127d689edc45 name=/runtime.v1.RuntimeService/ListContainers
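(Editor's note: the crio journal entries above are the server side of ordinary CRI gRPC calls (Version, ImageFsInfo, ListContainers) arriving over the CRI-O socket, most likely from the kubelet's periodic sync. As a hedged illustration of the client side, here is a small Go program using the published CRI API (k8s.io/cri-api) that issues the same two RuntimeService calls; the socket path is an assumption and the program needs access to it, typically as root.)

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CRI-O's usual socket location; adjust if the host uses a different path.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Same RPC as the /runtime.v1.RuntimeService/Version requests in the log.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// Same RPC as /runtime.v1.RuntimeService/ListContainers, with no filter,
	// which is why the server logs "No filters were applied".
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Metadata.Name, c.State, c.Id)
	}
}

The crictl tool issues the same RPCs (crictl version, crictl ps), which is a quicker way to reproduce these log entries on the node.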
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.662967626Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a2bc3715-656c-4be1-9835-6981fbfc2a49 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.663135132Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a2bc3715-656c-4be1-9835-6981fbfc2a49 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.664585134Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e0c2d5cd-78b3-459e-aff3-de5b2d5d7319 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.664927486Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712346664914547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=e0c2d5cd-78b3-459e-aff3-de5b2d5d7319 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.665559017Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6a61dc10-699e-4849-b46b-75d60714f474 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.665633442Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6a61dc10-699e-4849-b46b-75d60714f474 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.665795045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07,PodSandboxId:812c17a71bef27cd1a4b5e6e267981abad85c7899ec1142462a56f979fc80069,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698711467881960246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf2b5d0-1773-4ee6-882d-daff300f9d80,},Annotations:map[string]string{io.kubernetes.container.hash: 8b11db42,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373,PodSandboxId:df3a07191232d109244e31a29145f55fc6065949a6f00882fd5d0a8a1494b444,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698711467605206305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc67cf4-4a59-42bf-a6ca-b2be409f5077,},Annotations:map[string]string{io.kubernetes.container.hash: 3be23bff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e,PodSandboxId:7293c197a03b3201abc827276f5ea75d4abe60534d11435b0fed383dd4ea9771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698711467061316230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gp6pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7086342-a1ed-42b3-819a-ad7d8211ad17,},Annotations:map[string]string{io.kubernetes.container.hash: 5ee357d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c,PodSandboxId:d273e52b8919ce1f86ecb6ffc378b1a2966c7436139bbe047ea9e12bd95c38b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698711443858345748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
78de84bf9e4cea78d031c625cd991114,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3,PodSandboxId:c51a7b199e872c10c757926de1fbcc7f35b35879896087c54e04905a9b99fff3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698711443768156345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d006c17ee88c57b42e8328304b6f774,},Annotations:map[
string]string{io.kubernetes.container.hash: 3cd2a05e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb,PodSandboxId:9912485c08eacbf8a42dd77186c2a7efc211ed49abfd27f8d71f3eb36b66e3bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698711443691928392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03ea8799ec6c67cdc310b5507b
f1e01d,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850,PodSandboxId:cfb58aefd8cc0020511742f06ffe0d99edd92ea63fed0214e636944b75b4beb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698711443374523373,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c464abba4e6ceb32924cfebc2fc059e7,},An
notations:map[string]string{io.kubernetes.container.hash: 362a7add,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6a61dc10-699e-4849-b46b-75d60714f474 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.703418250Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=74cae6d8-8faf-4adb-9202-ca4928956196 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.703507641Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=74cae6d8-8faf-4adb-9202-ca4928956196 name=/runtime.v1.RuntimeService/Version
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.704440652Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=716d18ea-90e5-48f0-94a9-2ff3672a8ffb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.704788040Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698712346704775798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=716d18ea-90e5-48f0-94a9-2ff3672a8ffb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.705281925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=29f1f230-8e49-4ff0-addf-1a220b565387 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.705354653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=29f1f230-8e49-4ff0-addf-1a220b565387 name=/runtime.v1.RuntimeService/ListContainers
	Oct 31 00:32:26 no-preload-640155 crio[712]: time="2023-10-31 00:32:26.705507029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07,PodSandboxId:812c17a71bef27cd1a4b5e6e267981abad85c7899ec1142462a56f979fc80069,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698711467881960246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf2b5d0-1773-4ee6-882d-daff300f9d80,},Annotations:map[string]string{io.kubernetes.container.hash: 8b11db42,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373,PodSandboxId:df3a07191232d109244e31a29145f55fc6065949a6f00882fd5d0a8a1494b444,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698711467605206305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cc67cf4-4a59-42bf-a6ca-b2be409f5077,},Annotations:map[string]string{io.kubernetes.container.hash: 3be23bff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e,PodSandboxId:7293c197a03b3201abc827276f5ea75d4abe60534d11435b0fed383dd4ea9771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698711467061316230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gp6pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7086342-a1ed-42b3-819a-ad7d8211ad17,},Annotations:map[string]string{io.kubernetes.container.hash: 5ee357d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c,PodSandboxId:d273e52b8919ce1f86ecb6ffc378b1a2966c7436139bbe047ea9e12bd95c38b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698711443858345748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
78de84bf9e4cea78d031c625cd991114,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3,PodSandboxId:c51a7b199e872c10c757926de1fbcc7f35b35879896087c54e04905a9b99fff3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698711443768156345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d006c17ee88c57b42e8328304b6f774,},Annotations:map[
string]string{io.kubernetes.container.hash: 3cd2a05e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb,PodSandboxId:9912485c08eacbf8a42dd77186c2a7efc211ed49abfd27f8d71f3eb36b66e3bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698711443691928392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03ea8799ec6c67cdc310b5507b
f1e01d,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850,PodSandboxId:cfb58aefd8cc0020511742f06ffe0d99edd92ea63fed0214e636944b75b4beb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698711443374523373,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-640155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c464abba4e6ceb32924cfebc2fc059e7,},An
notations:map[string]string{io.kubernetes.container.hash: 362a7add,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=29f1f230-8e49-4ff0-addf-1a220b565387 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bd92760f1aa1b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   812c17a71bef2       storage-provisioner
	744ec7366f8a7       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   14 minutes ago      Running             kube-proxy                0                   df3a07191232d       kube-proxy-pkjsl
	12e3e0eb3fa0f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   7293c197a03b3       coredns-5dd5756b68-gp6pj
	6fe9c6ea686cf       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   15 minutes ago      Running             kube-scheduler            2                   d273e52b8919c       kube-scheduler-no-preload-640155
	07e6ccb405f57       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   c51a7b199e872       etcd-no-preload-640155
	d106e63a6e40b       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   15 minutes ago      Running             kube-controller-manager   2                   9912485c08eac       kube-controller-manager-no-preload-640155
	d99088bf7c1d1       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   15 minutes ago      Running             kube-apiserver            2                   cfb58aefd8cc0       kube-apiserver-no-preload-640155
	
	* 
	* ==> coredns [12e3e0eb3fa0f00291170616ab5348ad05efa5aefd3c2cf3adb57b8fed85068e] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:38278 - 17632 "HINFO IN 7134557370839004967.5240026344512166091. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009358349s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-640155
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-640155
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=462855d35e0791a9ef0dc759d2782e987ae8f7f4
	                    minikube.k8s.io/name=no-preload-640155
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_31T00_17_31_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 00:17:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-640155
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 Oct 2023 00:32:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 00:28:02 +0000   Tue, 31 Oct 2023 00:17:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 00:28:02 +0000   Tue, 31 Oct 2023 00:17:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 00:28:02 +0000   Tue, 31 Oct 2023 00:17:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 00:28:02 +0000   Tue, 31 Oct 2023 00:17:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.168
	  Hostname:    no-preload-640155
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 84caacece5d34fe39253fe3dd5ba85a5
	  System UUID:                84caacec-e5d3-4fe3-9253-fe3dd5ba85a5
	  Boot ID:                    1aa16f0e-0a43-4159-a950-eda4d1a7a374
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-gp6pj                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-640155                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-640155             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-640155    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-pkjsl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-640155             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-d2xg4              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-640155 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-640155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-640155 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             14m   kubelet          Node no-preload-640155 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m   kubelet          Node no-preload-640155 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-640155 event: Registered Node no-preload-640155 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct31 00:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068936] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Oct31 00:12] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.504771] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.156790] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.454130] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.335244] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.118128] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.159891] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.117833] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.216188] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +31.156647] systemd-fstab-generator[1269]: Ignoring "noauto" for root device
	[Oct31 00:13] kauditd_printk_skb: 29 callbacks suppressed
	[Oct31 00:17] systemd-fstab-generator[3874]: Ignoring "noauto" for root device
	[  +9.278826] systemd-fstab-generator[4215]: Ignoring "noauto" for root device
	[ +14.932487] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [07e6ccb405f579c3b27a90e76550f0c23342766e0132978ad6a59d060d415da3] <==
	* {"level":"info","ts":"2023-10-31T00:17:25.199612Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-31T00:17:26.012733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81aa8b2870c4e31b is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-31T00:17:26.012823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81aa8b2870c4e31b became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-31T00:17:26.012854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81aa8b2870c4e31b received MsgPreVoteResp from 81aa8b2870c4e31b at term 1"}
	{"level":"info","ts":"2023-10-31T00:17:26.012876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81aa8b2870c4e31b became candidate at term 2"}
	{"level":"info","ts":"2023-10-31T00:17:26.012882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81aa8b2870c4e31b received MsgVoteResp from 81aa8b2870c4e31b at term 2"}
	{"level":"info","ts":"2023-10-31T00:17:26.012891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81aa8b2870c4e31b became leader at term 2"}
	{"level":"info","ts":"2023-10-31T00:17:26.012899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 81aa8b2870c4e31b elected leader 81aa8b2870c4e31b at term 2"}
	{"level":"info","ts":"2023-10-31T00:17:26.014714Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T00:17:26.014987Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"81aa8b2870c4e31b","local-member-attributes":"{Name:no-preload-640155 ClientURLs:[https://192.168.61.168:2379]}","request-path":"/0/members/81aa8b2870c4e31b/attributes","cluster-id":"a8447026812d6081","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-31T00:17:26.015228Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T00:17:26.015947Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a8447026812d6081","local-member-id":"81aa8b2870c4e31b","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T00:17:26.016177Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T00:17:26.016231Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T00:17:26.016916Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-31T00:17:26.017135Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T00:17:26.018199Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.168:2379"}
	{"level":"info","ts":"2023-10-31T00:17:26.030085Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-31T00:17:26.030135Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-31T00:27:26.051658Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":680}
	{"level":"info","ts":"2023-10-31T00:27:26.055675Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":680,"took":"2.921832ms","hash":321229060}
	{"level":"info","ts":"2023-10-31T00:27:26.05581Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":321229060,"revision":680,"compact-revision":-1}
	{"level":"info","ts":"2023-10-31T00:32:26.060316Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":923}
	{"level":"info","ts":"2023-10-31T00:32:26.063542Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":923,"took":"2.548142ms","hash":1022113991}
	{"level":"info","ts":"2023-10-31T00:32:26.063618Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1022113991,"revision":923,"compact-revision":680}
	
	* 
	* ==> kernel <==
	*  00:32:27 up 20 min,  0 users,  load average: 0.12, 0.23, 0.20
	Linux no-preload-640155 5.10.57 #1 SMP Mon Oct 30 21:42:24 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [d99088bf7c1d17ccac8994d1d3ba03fb0cf295d8fb4f499e7f840c68a35f4850] <==
	* W1031 00:27:28.838952       1 handler_proxy.go:93] no RequestInfo found in the context
	W1031 00:27:28.839126       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:27:28.839518       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1031 00:27:28.839560       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1031 00:27:28.839679       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:27:28.841188       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:28:27.701323       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1031 00:28:28.840438       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:28:28.840606       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1031 00:28:28.840660       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1031 00:28:28.841764       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:28:28.841871       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:28:28.841911       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:29:27.701798       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1031 00:30:27.701691       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1031 00:30:28.841152       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:30:28.841347       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1031 00:30:28.841413       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1031 00:30:28.842235       1 handler_proxy.go:93] no RequestInfo found in the context
	E1031 00:30:28.842420       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 00:30:28.842465       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 00:31:27.701721       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [d106e63a6e40b143b2a05c79a8edffd585e50b8e53d0c0b22a773fb7c3ac80cb] <==
	* I1031 00:26:44.095339       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:27:13.577776       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:27:14.105228       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:27:43.584244       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:27:44.117360       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:28:13.590776       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:28:14.126659       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:28:43.598116       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:28:44.135624       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1031 00:28:48.327697       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="247.497µs"
	I1031 00:28:59.324609       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="692.849µs"
	E1031 00:29:13.604728       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:29:14.145667       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:29:43.610698       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:29:44.159164       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:30:13.616483       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:30:14.169112       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:30:43.623944       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:30:44.179903       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:31:13.630325       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:31:14.189866       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:31:43.637130       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:31:44.207253       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1031 00:32:13.644323       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1031 00:32:14.218210       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [744ec7366f8a7eaafbd0ffcd9cb919aa06e09a2a09e7c33e722082aa642be373] <==
	* I1031 00:17:47.834527       1 server_others.go:69] "Using iptables proxy"
	I1031 00:17:47.855493       1 node.go:141] Successfully retrieved node IP: 192.168.61.168
	I1031 00:17:47.935439       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1031 00:17:47.935514       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1031 00:17:47.940593       1 server_others.go:152] "Using iptables Proxier"
	I1031 00:17:47.940682       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1031 00:17:47.940858       1 server.go:846] "Version info" version="v1.28.3"
	I1031 00:17:47.940871       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 00:17:47.942800       1 config.go:188] "Starting service config controller"
	I1031 00:17:47.942899       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1031 00:17:47.942960       1 config.go:97] "Starting endpoint slice config controller"
	I1031 00:17:47.942965       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1031 00:17:47.944798       1 config.go:315] "Starting node config controller"
	I1031 00:17:47.944834       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1031 00:17:48.046493       1 shared_informer.go:318] Caches are synced for node config
	I1031 00:17:48.046556       1 shared_informer.go:318] Caches are synced for service config
	I1031 00:17:48.046580       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [6fe9c6ea686cf691409c6c16c1852787dc07e8489f21a95887cb6119de9c169c] <==
	* W1031 00:17:28.673793       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1031 00:17:28.674084       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1031 00:17:28.733245       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1031 00:17:28.733326       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1031 00:17:28.790715       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 00:17:28.790772       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1031 00:17:28.807938       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1031 00:17:28.808088       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1031 00:17:28.855280       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1031 00:17:28.855405       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1031 00:17:28.942532       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 00:17:28.942612       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1031 00:17:29.003128       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1031 00:17:29.003184       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1031 00:17:29.039315       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1031 00:17:29.039375       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1031 00:17:29.073222       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1031 00:17:29.073285       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1031 00:17:29.096862       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1031 00:17:29.096926       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1031 00:17:29.123347       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1031 00:17:29.123442       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1031 00:17:29.155633       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 00:17:29.155727       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1031 00:17:31.732149       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-31 00:12:05 UTC, ends at Tue 2023-10-31 00:32:27 UTC. --
	Oct 31 00:29:31 no-preload-640155 kubelet[4222]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 00:29:31 no-preload-640155 kubelet[4222]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 00:29:31 no-preload-640155 kubelet[4222]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 00:29:36 no-preload-640155 kubelet[4222]: E1031 00:29:36.302931    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:29:51 no-preload-640155 kubelet[4222]: E1031 00:29:51.303849    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:30:05 no-preload-640155 kubelet[4222]: E1031 00:30:05.303886    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:30:20 no-preload-640155 kubelet[4222]: E1031 00:30:20.304231    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:30:31 no-preload-640155 kubelet[4222]: E1031 00:30:31.386677    4222 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 00:30:31 no-preload-640155 kubelet[4222]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 00:30:31 no-preload-640155 kubelet[4222]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 00:30:31 no-preload-640155 kubelet[4222]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 00:30:34 no-preload-640155 kubelet[4222]: E1031 00:30:34.303136    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:30:47 no-preload-640155 kubelet[4222]: E1031 00:30:47.304222    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:30:58 no-preload-640155 kubelet[4222]: E1031 00:30:58.303755    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:31:09 no-preload-640155 kubelet[4222]: E1031 00:31:09.303338    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:31:21 no-preload-640155 kubelet[4222]: E1031 00:31:21.303276    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:31:31 no-preload-640155 kubelet[4222]: E1031 00:31:31.382420    4222 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 00:31:31 no-preload-640155 kubelet[4222]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 00:31:31 no-preload-640155 kubelet[4222]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 00:31:31 no-preload-640155 kubelet[4222]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 00:31:36 no-preload-640155 kubelet[4222]: E1031 00:31:36.303628    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:31:51 no-preload-640155 kubelet[4222]: E1031 00:31:51.304438    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:32:03 no-preload-640155 kubelet[4222]: E1031 00:32:03.304156    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:32:14 no-preload-640155 kubelet[4222]: E1031 00:32:14.303779    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	Oct 31 00:32:26 no-preload-640155 kubelet[4222]: E1031 00:32:26.304139    4222 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-d2xg4" podUID="b16ae9e6-6deb-485f-af5c-35cafada4a39"
	
	* 
	* ==> storage-provisioner [bd92760f1aa1b4167a6435d12152fcc7cc12f73d49ef857c56e9f3d62ff2cb07] <==
	* I1031 00:17:48.022794       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1031 00:17:48.057909       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1031 00:17:48.058394       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1031 00:17:48.072953       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1031 00:17:48.073446       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-640155_048d4a56-83f9-4317-b90e-c2bc17b7da39!
	I1031 00:17:48.079689       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa96ec76-f883-44a6-a949-cebaf07baf8e", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-640155_048d4a56-83f9-4317-b90e-c2bc17b7da39 became leader
	I1031 00:17:48.175433       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-640155_048d4a56-83f9-4317-b90e-c2bc17b7da39!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-640155 -n no-preload-640155
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-640155 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-d2xg4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-640155 describe pod metrics-server-57f55c9bc5-d2xg4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-640155 describe pod metrics-server-57f55c9bc5-d2xg4: exit status 1 (63.468201ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-d2xg4" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-640155 describe pod metrics-server-57f55c9bc5-d2xg4: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (60.54s)
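Note on the post-mortem above: the describe step targets the pod name collected one step earlier (metrics-server-57f55c9bc5-d2xg4), and by the time it runs the API server already reports that pod as NotFound, so the describe itself exits non-zero. Below is a minimal sketch of resolving the current metrics-server pods by label at describe time instead of by a previously captured name; the k8s-app=metrics-server label selector and the exec-based kubectl call are illustrative assumptions, not the suite's actual helpers.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// metricsServerPods returns the names of the metrics-server pods that exist
// right now, resolved by a label selector rather than by a stale pod name.
// The label selector is an assumption for illustration.
func metricsServerPods(kubeContext string) ([]string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"-n", "kube-system", "get", "pods",
		"-l", "k8s-app=metrics-server",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	pods, err := metricsServerPods("no-preload-640155")
	if err != nil {
		fmt.Println("listing metrics-server pods failed:", err)
		return
	}
	// Report whichever pods exist at this moment instead of a captured name.
	for _, pod := range pods {
		fmt.Println("current metrics-server pod:", pod)
	}
}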


Test pass (227/292)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 20
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.3/json-events 4.94
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.14
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
19 TestBinaryMirror 0.58
20 TestOffline 93.23
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
25 TestAddons/Setup 146.66
27 TestAddons/parallel/Registry 15.56
29 TestAddons/parallel/InspektorGadget 10.87
30 TestAddons/parallel/MetricsServer 6.29
31 TestAddons/parallel/HelmTiller 10.79
33 TestAddons/parallel/CSI 73.52
34 TestAddons/parallel/Headlamp 15.55
35 TestAddons/parallel/CloudSpanner 5.78
36 TestAddons/parallel/LocalPath 54.52
37 TestAddons/parallel/NvidiaDevicePlugin 5.67
40 TestAddons/serial/GCPAuth/Namespaces 0.12
42 TestCertOptions 81.18
43 TestCertExpiration 286.63
45 TestForceSystemdFlag 81.44
46 TestForceSystemdEnv 68.27
48 TestKVMDriverInstallOrUpdate 1.27
52 TestErrorSpam/setup 45.67
53 TestErrorSpam/start 0.38
54 TestErrorSpam/status 0.82
55 TestErrorSpam/pause 1.66
56 TestErrorSpam/unpause 1.76
57 TestErrorSpam/stop 2.26
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 60.07
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 56.38
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.08
68 TestFunctional/serial/CacheCmd/cache/add_remote 3.43
69 TestFunctional/serial/CacheCmd/cache/add_local 1.12
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.72
74 TestFunctional/serial/CacheCmd/cache/delete 0.13
75 TestFunctional/serial/MinikubeKubectlCmd 0.13
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
77 TestFunctional/serial/ExtraConfig 38.2
78 TestFunctional/serial/ComponentHealth 0.07
79 TestFunctional/serial/LogsCmd 1.54
80 TestFunctional/serial/LogsFileCmd 1.55
81 TestFunctional/serial/InvalidService 4.5
83 TestFunctional/parallel/ConfigCmd 0.44
84 TestFunctional/parallel/DashboardCmd 20.45
85 TestFunctional/parallel/DryRun 0.45
86 TestFunctional/parallel/InternationalLanguage 0.17
87 TestFunctional/parallel/StatusCmd 1.23
91 TestFunctional/parallel/ServiceCmdConnect 14.55
92 TestFunctional/parallel/AddonsCmd 0.19
93 TestFunctional/parallel/PersistentVolumeClaim 38.67
95 TestFunctional/parallel/SSHCmd 0.53
96 TestFunctional/parallel/CpCmd 1.05
97 TestFunctional/parallel/MySQL 29.47
98 TestFunctional/parallel/FileSync 0.25
99 TestFunctional/parallel/CertSync 1.66
103 TestFunctional/parallel/NodeLabels 0.08
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
107 TestFunctional/parallel/License 0.5
108 TestFunctional/parallel/Version/short 0.07
109 TestFunctional/parallel/Version/components 0.59
110 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
113 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
115 TestFunctional/parallel/ImageCommands/Setup 1.02
116 TestFunctional/parallel/ServiceCmd/DeployApp 15.23
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.8
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.84
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.27
129 TestFunctional/parallel/ServiceCmd/List 0.35
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
132 TestFunctional/parallel/ServiceCmd/Format 0.35
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
134 TestFunctional/parallel/ServiceCmd/URL 0.34
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.74
136 TestFunctional/parallel/ProfileCmd/profile_list 0.35
137 TestFunctional/parallel/MountCmd/any-port 10.1
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.7
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.23
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.19
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
145 TestFunctional/parallel/MountCmd/specific-port 2.17
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.67
147 TestFunctional/delete_addon-resizer_images 0.07
148 TestFunctional/delete_my-image_image 0.02
149 TestFunctional/delete_minikube_cached_images 0.02
153 TestIngressAddonLegacy/StartLegacyK8sCluster 103.78
155 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.02
156 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.56
160 TestJSONOutput/start/Command 98.01
161 TestJSONOutput/start/Audit 0
163 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/pause/Command 0.69
167 TestJSONOutput/pause/Audit 0
169 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/unpause/Command 0.63
173 TestJSONOutput/unpause/Audit 0
175 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/stop/Command 7.11
179 TestJSONOutput/stop/Audit 0
181 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
183 TestErrorJSONOutput 0.23
188 TestMainNoArgs 0.06
189 TestMinikubeProfile 104.73
192 TestMountStart/serial/StartWithMountFirst 28.46
193 TestMountStart/serial/VerifyMountFirst 0.41
194 TestMountStart/serial/StartWithMountSecond 28.05
195 TestMountStart/serial/VerifyMountSecond 0.42
196 TestMountStart/serial/DeleteFirst 0.88
197 TestMountStart/serial/VerifyMountPostDelete 0.43
198 TestMountStart/serial/Stop 1.26
199 TestMountStart/serial/RestartStopped 21.87
200 TestMountStart/serial/VerifyMountPostStop 0.41
203 TestMultiNode/serial/FreshStart2Nodes 109.13
204 TestMultiNode/serial/DeployApp2Nodes 3.91
206 TestMultiNode/serial/AddNode 40.74
207 TestMultiNode/serial/ProfileList 0.22
208 TestMultiNode/serial/CopyFile 7.98
209 TestMultiNode/serial/StopNode 3
210 TestMultiNode/serial/StartAfterStop 28.7
212 TestMultiNode/serial/DeleteNode 1.82
214 TestMultiNode/serial/RestartMultiNode 447.26
215 TestMultiNode/serial/ValidateNameConflict 49.24
222 TestScheduledStopUnix 116.74
228 TestKubernetesUpgrade 184.21
231 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
232 TestNoKubernetes/serial/StartWithK8s 106.15
241 TestPause/serial/Start 77.68
242 TestNoKubernetes/serial/StartWithStopK8s 41.45
243 TestNoKubernetes/serial/Start 31.16
244 TestStoppedBinaryUpgrade/Setup 0.38
247 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
248 TestNoKubernetes/serial/ProfileList 2.4
249 TestNoKubernetes/serial/Stop 2.3
250 TestNoKubernetes/serial/StartNoArgs 22.91
251 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
259 TestNetworkPlugins/group/false 5.15
264 TestStartStop/group/old-k8s-version/serial/FirstStart 158.33
266 TestStartStop/group/no-preload/serial/FirstStart 156.21
267 TestStoppedBinaryUpgrade/MinikubeLogs 0.41
269 TestStartStop/group/embed-certs/serial/FirstStart 102.15
270 TestStartStop/group/old-k8s-version/serial/DeployApp 7.5
271 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.02
273 TestStartStop/group/no-preload/serial/DeployApp 9.4
274 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.28
276 TestStartStop/group/embed-certs/serial/DeployApp 8.41
277 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.26
280 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.44
282 TestStartStop/group/old-k8s-version/serial/SecondStart 791.69
283 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.45
284 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.2
287 TestStartStop/group/no-preload/serial/SecondStart 878.94
289 TestStartStop/group/embed-certs/serial/SecondStart 528.29
291 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 506.91
301 TestStartStop/group/newest-cni/serial/FirstStart 66.53
302 TestNetworkPlugins/group/auto/Start 66.42
303 TestStartStop/group/newest-cni/serial/DeployApp 0
304 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.59
305 TestStartStop/group/newest-cni/serial/Stop 10.2
306 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
307 TestStartStop/group/newest-cni/serial/SecondStart 57.2
308 TestNetworkPlugins/group/auto/KubeletFlags 0.22
309 TestNetworkPlugins/group/auto/NetCatPod 11.4
310 TestNetworkPlugins/group/auto/DNS 0.27
311 TestNetworkPlugins/group/auto/Localhost 0.2
312 TestNetworkPlugins/group/auto/HairPin 0.19
313 TestNetworkPlugins/group/kindnet/Start 71.55
314 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
315 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
316 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
317 TestStartStop/group/newest-cni/serial/Pause 2.83
318 TestNetworkPlugins/group/calico/Start 105.85
319 TestNetworkPlugins/group/custom-flannel/Start 117.99
320 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
321 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
322 TestNetworkPlugins/group/kindnet/NetCatPod 15.34
323 TestNetworkPlugins/group/kindnet/DNS 0.29
324 TestNetworkPlugins/group/kindnet/Localhost 0.21
325 TestNetworkPlugins/group/kindnet/HairPin 0.19
326 TestNetworkPlugins/group/enable-default-cni/Start 103.24
327 TestNetworkPlugins/group/calico/ControllerPod 5.03
328 TestNetworkPlugins/group/calico/KubeletFlags 0.24
329 TestNetworkPlugins/group/calico/NetCatPod 12.48
330 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
331 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.42
332 TestNetworkPlugins/group/flannel/Start 85.31
333 TestNetworkPlugins/group/calico/DNS 0.28
334 TestNetworkPlugins/group/calico/Localhost 0.31
335 TestNetworkPlugins/group/calico/HairPin 0.24
336 TestNetworkPlugins/group/custom-flannel/DNS 0.28
337 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
338 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
339 TestNetworkPlugins/group/bridge/Start 70.8
340 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
341 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.32
342 TestNetworkPlugins/group/flannel/ControllerPod 5.02
343 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
344 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
345 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
346 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
347 TestNetworkPlugins/group/flannel/NetCatPod 13.32
348 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
349 TestNetworkPlugins/group/bridge/NetCatPod 11.33
350 TestNetworkPlugins/group/flannel/DNS 0.22
351 TestNetworkPlugins/group/bridge/DNS 26.07
352 TestNetworkPlugins/group/flannel/Localhost 0.16
353 TestNetworkPlugins/group/flannel/HairPin 0.19
354 TestNetworkPlugins/group/bridge/Localhost 0.15
355 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.16.0/json-events (20s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-629575 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-629575 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (19.995086531s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (20.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-629575
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-629575: exit status 85 (77.968738ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-629575 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:01 UTC |          |
	|         | -p download-only-629575        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/30 23:01:37
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 23:01:37.867668  216016 out.go:296] Setting OutFile to fd 1 ...
	I1030 23:01:37.867969  216016 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:01:37.867980  216016 out.go:309] Setting ErrFile to fd 2...
	I1030 23:01:37.867988  216016 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:01:37.868218  216016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	W1030 23:01:37.868355  216016 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17527-208817/.minikube/config/config.json: open /home/jenkins/minikube-integration/17527-208817/.minikube/config/config.json: no such file or directory
	I1030 23:01:37.869145  216016 out.go:303] Setting JSON to true
	I1030 23:01:37.870137  216016 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":24250,"bootTime":1698682648,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 23:01:37.870207  216016 start.go:138] virtualization: kvm guest
	I1030 23:01:37.873111  216016 out.go:97] [download-only-629575] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 23:01:37.874988  216016 out.go:169] MINIKUBE_LOCATION=17527
	W1030 23:01:37.873264  216016 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball: no such file or directory
	I1030 23:01:37.873373  216016 notify.go:220] Checking for updates...
	I1030 23:01:37.878158  216016 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 23:01:37.879869  216016 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:01:37.881465  216016 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1030 23:01:37.883016  216016 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1030 23:01:37.886103  216016 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1030 23:01:37.886393  216016 driver.go:378] Setting default libvirt URI to qemu:///system
	I1030 23:01:37.923087  216016 out.go:97] Using the kvm2 driver based on user configuration
	I1030 23:01:37.923121  216016 start.go:298] selected driver: kvm2
	I1030 23:01:37.923128  216016 start.go:902] validating driver "kvm2" against <nil>
	I1030 23:01:37.923500  216016 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:01:37.923612  216016 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17527-208817/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1030 23:01:37.940643  216016 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1030 23:01:37.940711  216016 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1030 23:01:37.941364  216016 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1030 23:01:37.941533  216016 start_flags.go:916] Wait components to verify : map[apiserver:true system_pods:true]
	I1030 23:01:37.941621  216016 cni.go:84] Creating CNI manager for ""
	I1030 23:01:37.941636  216016 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1030 23:01:37.941648  216016 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1030 23:01:37.941658  216016 start_flags.go:323] config:
	{Name:download-only-629575 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-629575 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1030 23:01:37.941915  216016 iso.go:125] acquiring lock: {Name:mk17c26869b21ec4c3726ac5b4b2fb393d92c043 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1030 23:01:37.944141  216016 out.go:97] Downloading VM boot image ...
	I1030 23:01:37.944192  216016 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/iso/amd64/minikube-v1.32.0-1698684775-17527-amd64.iso
	I1030 23:01:49.599770  216016 out.go:97] Starting control plane node download-only-629575 in cluster download-only-629575
	I1030 23:01:49.599789  216016 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1030 23:01:49.618922  216016 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1030 23:01:49.618963  216016 cache.go:56] Caching tarball of preloaded images
	I1030 23:01:49.619138  216016 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1030 23:01:49.620894  216016 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1030 23:01:49.620912  216016 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1030 23:01:49.649537  216016 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17527-208817/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-629575"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
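
The exit status 85 above is treated as expected by the test: with --download-only no control-plane node is ever created, so "minikube logs" has nothing to collect beyond the audit table and start log it prints. To confirm locally that such a run actually populated the cache, the artifacts land under the MINIKUBE_HOME recorded in the log (here /home/jenkins/minikube-integration/17527-208817/.minikube); a minimal sketch, substituting your own MINIKUBE_HOME:

    # ISO and preload tarball cached by the download-only run above
    ls "$MINIKUBE_HOME/cache/iso/amd64/minikube-v1.32.0-1698684775-17527-amd64.iso"
    ls "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4"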

                                                
                                    
TestDownloadOnly/v1.28.3/json-events (4.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-629575 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-629575 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.935755126s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (4.94s)

                                                
                                    
TestDownloadOnly/v1.28.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-629575
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-629575: exit status 85 (77.828643ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-629575 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:01 UTC |          |
	|         | -p download-only-629575        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	| start   | -o=json --download-only        | download-only-629575 | jenkins | v1.32.0-beta.0 | 30 Oct 23 23:01 UTC |          |
	|         | -p download-only-629575        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/30 23:01:57
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1030 23:01:57.939917  216097 out.go:296] Setting OutFile to fd 1 ...
	I1030 23:01:57.940054  216097 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:01:57.940063  216097 out.go:309] Setting ErrFile to fd 2...
	I1030 23:01:57.940068  216097 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:01:57.940215  216097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	W1030 23:01:57.940313  216097 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17527-208817/.minikube/config/config.json: open /home/jenkins/minikube-integration/17527-208817/.minikube/config/config.json: no such file or directory
	I1030 23:01:57.940707  216097 out.go:303] Setting JSON to true
	I1030 23:01:57.941571  216097 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":24270,"bootTime":1698682648,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 23:01:57.941632  216097 start.go:138] virtualization: kvm guest
	I1030 23:01:57.943940  216097 out.go:97] [download-only-629575] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 23:01:57.944109  216097 notify.go:220] Checking for updates...
	I1030 23:01:57.945505  216097 out.go:169] MINIKUBE_LOCATION=17527
	I1030 23:01:57.947000  216097 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 23:01:57.948376  216097 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:01:57.949696  216097 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1030 23:01:57.950933  216097 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-629575"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-629575
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-415718 --alsologtostderr --binary-mirror http://127.0.0.1:41209 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-415718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-415718
--- PASS: TestBinaryMirror (0.58s)

                                                
                                    
TestOffline (93.23s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-561110 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-561110 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m32.168199357s)
helpers_test.go:175: Cleaning up "offline-crio-561110" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-561110
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-561110: (1.063123008s)
--- PASS: TestOffline (93.23s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-780757
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-780757: exit status 85 (69.392016ms)

                                                
                                                
-- stdout --
	* Profile "addons-780757" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-780757"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-780757
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-780757: exit status 85 (69.952897ms)

                                                
                                                
-- stdout --
	* Profile "addons-780757" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-780757"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (146.66s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-780757 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-780757 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m26.661731501s)
--- PASS: TestAddons/Setup (146.66s)

                                                
                                    
TestAddons/parallel/Registry (15.56s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 30.790056ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-jn9tz" [67828364-5870-444a-96d1-9b020f6fba34] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.020794s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hxqz8" [299618eb-6ec6-4599-9c0b-63b3612bdad0] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.023518362s
addons_test.go:339: (dbg) Run:  kubectl --context addons-780757 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-780757 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-780757 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.623389662s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-780757 ip
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-780757 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.56s)
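
The registry check above boils down to reaching the in-cluster registry service by its DNS name from a throwaway busybox pod. The same probe can be replayed by hand against the profile; a sketch using the command recorded in the log, with a hypothetical pod name registry-check:

    # one-shot busybox pod probing the registry service DNS name (same probe the test runs)
    kubectl --context addons-780757 run --rm registry-check --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"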

                                                
                                    
TestAddons/parallel/InspektorGadget (10.87s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ltrkg" [91e828da-cc98-4a9c-8a25-d5a23b375ec4] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.013054247s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-780757
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-780757: (5.854828669s)
--- PASS: TestAddons/parallel/InspektorGadget (10.87s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.29s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 6.581514ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-4499m" [22c8ca31-7056-4163-babb-971556cba3e7] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.023857232s
addons_test.go:414: (dbg) Run:  kubectl --context addons-780757 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-780757 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:431: (dbg) Done: out/minikube-linux-amd64 -p addons-780757 addons disable metrics-server --alsologtostderr -v=1: (1.157045685s)
--- PASS: TestAddons/parallel/MetricsServer (6.29s)

                                                
                                    
TestAddons/parallel/HelmTiller (10.79s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 4.047292ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-kbjrj" [00fd1262-2787-4601-9bf1-5be82236edba] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.015189777s
addons_test.go:472: (dbg) Run:  kubectl --context addons-780757 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-780757 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.033631013s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-780757 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.79s)

                                                
                                    
TestAddons/parallel/CSI (73.52s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 31.40782ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-780757 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/10/30 23:04:45 [DEBUG] GET http://192.168.39.172:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-780757 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a0fb5c9b-5981-4e90-b8de-1656e4abaa7a] Pending
helpers_test.go:344: "task-pv-pod" [a0fb5c9b-5981-4e90-b8de-1656e4abaa7a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a0fb5c9b-5981-4e90-b8de-1656e4abaa7a] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.028790629s
addons_test.go:583: (dbg) Run:  kubectl --context addons-780757 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-780757 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-780757 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-780757 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-780757 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-780757 delete pod task-pv-pod: (1.130596381s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-780757 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-780757 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-780757 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [906640b2-7316-4e1c-ae33-ea5d94eeea4a] Pending
helpers_test.go:344: "task-pv-pod-restore" [906640b2-7316-4e1c-ae33-ea5d94eeea4a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [906640b2-7316-4e1c-ae33-ea5d94eeea4a] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.014873223s
addons_test.go:625: (dbg) Run:  kubectl --context addons-780757 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-780757 delete pod task-pv-pod-restore: (1.341962659s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-780757 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-780757 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-780757 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-780757 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.821694258s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-780757 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (73.52s)
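
The long run of "get pvc ... -o jsonpath={.status.phase}" calls above is the harness polling until the claim reports Bound. When replaying this flow by hand, the same condition can be expressed as a single wait on reasonably recent kubectl; a sketch, assuming the hpvc claim from testdata/csi-hostpath-driver/pvc.yaml has been created in the default namespace:

    # block until the PVC phase reaches Bound instead of polling jsonpath in a loop
    kubectl --context addons-780757 -n default wait pvc/hpvc \
      --for=jsonpath='{.status.phase}'=Bound --timeout=6m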

                                                
                                    
TestAddons/parallel/Headlamp (15.55s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-780757 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-780757 --alsologtostderr -v=1: (1.509574285s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-wt7wf" [e87c7130-8952-4faa-8018-6f1bd9b967cf] Pending
helpers_test.go:344: "headlamp-94b766c-wt7wf" [e87c7130-8952-4faa-8018-6f1bd9b967cf] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-wt7wf" [e87c7130-8952-4faa-8018-6f1bd9b967cf] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.037198782s
--- PASS: TestAddons/parallel/Headlamp (15.55s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.78s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-xdrwq" [b757bc9b-c183-4558-b338-89d9234e4469] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.027295897s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-780757
--- PASS: TestAddons/parallel/CloudSpanner (5.78s)

                                                
                                    
TestAddons/parallel/LocalPath (54.52s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-780757 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-780757 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-780757 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [e27ddda0-c386-4054-a0f2-d6b3e9f623ba] Pending
helpers_test.go:344: "test-local-path" [e27ddda0-c386-4054-a0f2-d6b3e9f623ba] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [e27ddda0-c386-4054-a0f2-d6b3e9f623ba] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [e27ddda0-c386-4054-a0f2-d6b3e9f623ba] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.014303459s
addons_test.go:890: (dbg) Run:  kubectl --context addons-780757 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-780757 ssh "cat /opt/local-path-provisioner/pvc-32d74994-96d4-4338-bff6-25a7bc634797_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-780757 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-780757 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-780757 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-780757 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.870385764s)
--- PASS: TestAddons/parallel/LocalPath (54.52s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.67s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-w9bkq" [1450dcb6-9793-46c2-9756-3c6d18987e5c] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.041144694s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-780757
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.67s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-780757 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-780757 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)
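
This subtest asserts that, with the gcp-auth addon active, the gcp-auth secret shows up in a freshly created namespace. It is small enough to replay by hand against the same profile; a sketch with an arbitrary namespace name demo-ns:

    # the gcp-auth addon is expected to populate the secret in the new namespace
    kubectl --context addons-780757 create ns demo-ns
    kubectl --context addons-780757 get secret gcp-auth -n demo-ns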

                                                
                                    
TestCertOptions (81.18s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-344463 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-344463 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m19.660183594s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-344463 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-344463 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-344463 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-344463" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-344463
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-344463: (1.008417934s)
--- PASS: TestCertOptions (81.18s)
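
cert_options_test.go starts the cluster with extra --apiserver-ips/--apiserver-names and a custom --apiserver-port, then inspects the generated apiserver certificate inside the VM. To eyeball the SANs on such a profile yourself, the same openssl call can be narrowed with a grep on the host (the grep is an addition here, not part of the test):

    # print only the Subject Alternative Name block of the apiserver certificate
    out/minikube-linux-amd64 -p cert-options-344463 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"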

                                                
                                    
TestCertExpiration (286.63s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-663908 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-663908 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m25.429689578s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-663908 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-663908 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (20.377478668s)
helpers_test.go:175: Cleaning up "cert-expiration-663908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-663908
--- PASS: TestCertExpiration (286.63s)
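
Roughly, the flow above is: create a cluster with three-minute certificates, let them lapse, then start again with a long expiration so the certificates are regenerated. A minimal sketch, with an illustrative profile name and `minikube` standing in for the binary under test:

    minikube start -p cert-expiration-demo --memory=2048 --cert-expiration=3m \
      --driver=kvm2 --container-runtime=crio
    # ...wait for the 3m certificates to expire (presumably what accounts for most
    # of the ~286s test duration), then restart with a long expiration:
    minikube start -p cert-expiration-demo --memory=2048 --cert-expiration=8760h \
      --driver=kvm2 --container-runtime=crio
    minikube delete -p cert-expiration-demo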

                                                
                                    
TestForceSystemdFlag (81.44s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-768768 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-768768 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m20.163005688s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-768768 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-768768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-768768
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-768768: (1.028187682s)
--- PASS: TestForceSystemdFlag (81.44s)
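
The equivalent manual check, as a sketch: start with --force-systemd and read the CRI-O drop-in the test cats. What the test asserts about that file (presumably the systemd cgroup manager) is not shown in this log; `minikube` again stands in for the binary under test.

    minikube start -p force-systemd-demo --memory=2048 --force-systemd \
      --driver=kvm2 --container-runtime=crio
    # File read by the test; the exact assertion on its contents is not in this log.
    minikube -p force-systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
    minikube delete -p force-systemd-demo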

                                                
                                    
TestForceSystemdEnv (68.27s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-781077 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-781077 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m7.42521802s)
helpers_test.go:175: Cleaning up "force-systemd-env-781077" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-781077
--- PASS: TestForceSystemdEnv (68.27s)
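
This is the environment-variable counterpart of the flag test above. A sketch, with the caveat that the value the test assigns to MINIKUBE_FORCE_SYSTEMD is not visible in this report and "true" is assumed:

    # MINIKUBE_FORCE_SYSTEMD=true is an assumption; only the variable name appears in this report.
    MINIKUBE_FORCE_SYSTEMD=true minikube start -p force-systemd-env-demo \
      --memory=2048 --alsologtostderr -v=5 --driver=kvm2 --container-runtime=crio
    minikube delete -p force-systemd-env-demo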

                                                
                                    
TestKVMDriverInstallOrUpdate (1.27s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.27s)

                                                
                                    
TestErrorSpam/setup (45.67s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-527815 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-527815 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-527815 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-527815 --driver=kvm2  --container-runtime=crio: (45.667297142s)
--- PASS: TestErrorSpam/setup (45.67s)

                                                
                                    
TestErrorSpam/start (0.38s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527815 --log_dir /tmp/nospam-527815 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527815 --log_dir /tmp/nospam-527815 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527815 --log_dir /tmp/nospam-527815 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
TestErrorSpam/status (0.82s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527815 --log_dir /tmp/nospam-527815 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527815 --log_dir /tmp/nospam-527815 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527815 --log_dir /tmp/nospam-527815 status
--- PASS: TestErrorSpam/status (0.82s)

                                                
                                    
TestErrorSpam/pause (1.66s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527815 --log_dir /tmp/nospam-527815 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527815 --log_dir /tmp/nospam-527815 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527815 --log_dir /tmp/nospam-527815 pause
--- PASS: TestErrorSpam/pause (1.66s)

                                                
                                    
TestErrorSpam/unpause (1.76s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527815 --log_dir /tmp/nospam-527815 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527815 --log_dir /tmp/nospam-527815 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527815 --log_dir /tmp/nospam-527815 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

                                                
                                    
TestErrorSpam/stop (2.26s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527815 --log_dir /tmp/nospam-527815 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-527815 --log_dir /tmp/nospam-527815 stop: (2.093967091s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527815 --log_dir /tmp/nospam-527815 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527815 --log_dir /tmp/nospam-527815 stop
--- PASS: TestErrorSpam/stop (2.26s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17527-208817/.minikube/files/etc/test/nested/copy/216005/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (60.07s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-167609 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-167609 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m0.07221767s)
--- PASS: TestFunctional/serial/StartWithProxy (60.07s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (56.38s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-167609 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-167609 --alsologtostderr -v=8: (56.376603767s)
functional_test.go:659: soft start took 56.37723374s for "functional-167609" cluster.
--- PASS: TestFunctional/serial/SoftStart (56.38s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-167609 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.43s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-167609 cache add registry.k8s.io/pause:3.1: (1.14465272s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-167609 cache add registry.k8s.io/pause:3.3: (1.113588902s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-167609 cache add registry.k8s.io/pause:latest: (1.16690535s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.43s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-167609 /tmp/TestFunctionalserialCacheCmdcacheadd_local2769133657/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 cache add minikube-local-cache-test:functional-167609
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 cache delete minikube-local-cache-test:functional-167609
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-167609
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)
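
In shell terms, the local-image caching round trip above looks roughly like this; the Docker build context directory is illustrative (the test uses a temporary directory), and `minikube` stands in for the binary under test.

    # Build a throwaway image on the host, push it into minikube's on-disk cache,
    # then remove it from both places.
    docker build -t minikube-local-cache-test:functional-167609 ./throwaway-context
    minikube -p functional-167609 cache add minikube-local-cache-test:functional-167609
    minikube -p functional-167609 cache delete minikube-local-cache-test:functional-167609
    docker rmi minikube-local-cache-test:functional-167609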

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167609 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (243.771437ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)
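
Condensed, the reload check above is: delete a cached image inside the node, confirm crictl no longer finds it, run `cache reload` to push the host-side cache back in, and confirm the image is visible again. Sketch (with `minikube` standing in for the binary under test):

    minikube -p functional-167609 ssh sudo crictl rmi registry.k8s.io/pause:latest
    # Expected to fail here ("no such image ... present"):
    minikube -p functional-167609 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    minikube -p functional-167609 cache reload
    # Succeeds again after the reload:
    minikube -p functional-167609 ssh sudo crictl inspecti registry.k8s.io/pause:latest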

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 kubectl -- --context functional-167609 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-167609 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (38.2s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-167609 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-167609 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.194680793s)
functional_test.go:757: restart took 38.194825441s for "functional-167609" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.20s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-167609 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
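
Taken together, ExtraConfig and ComponentHealth amount to: restart the existing profile with an extra apiserver flag, then confirm all control-plane pods report Running and Ready. A sketch using only the commands logged above, with `minikube` standing in for the binary under test:

    minikube start -p functional-167609 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    # etcd, kube-apiserver, kube-controller-manager and kube-scheduler should all report Ready.
    kubectl --context functional-167609 get po -l tier=control-plane -n kube-system -o=json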

                                                
                                    
TestFunctional/serial/LogsCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-167609 logs: (1.54436957s)
--- PASS: TestFunctional/serial/LogsCmd (1.54s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 logs --file /tmp/TestFunctionalserialLogsFileCmd3746602060/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-167609 logs --file /tmp/TestFunctionalserialLogsFileCmd3746602060/001/logs.txt: (1.544395896s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                    
TestFunctional/serial/InvalidService (4.5s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-167609 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-167609
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-167609: exit status 115 (324.204849ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.211:30777 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-167609 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.50s)
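
The negative case above can be replayed directly; testdata/invalidsvc.yaml ships with the minikube source tree and its contents are not reproduced in this log. Sketch, with `minikube` standing in for the binary under test:

    kubectl --context functional-167609 apply -f testdata/invalidsvc.yaml
    # Expected to exit 115 with SVC_UNREACHABLE: the service has no running pods behind it.
    minikube service invalid-svc -p functional-167609
    kubectl --context functional-167609 delete -f testdata/invalidsvc.yaml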

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167609 config get cpus: exit status 14 (68.352942ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167609 config get cpus: exit status 14 (64.960906ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
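
The config round trip above, condensed (exit status 14 marks a key that is not set); `minikube` stands in for the binary under test:

    minikube -p functional-167609 config set cpus 2
    minikube -p functional-167609 config get cpus    # should print the stored value
    minikube -p functional-167609 config unset cpus
    minikube -p functional-167609 config get cpus    # exits 14: key not found in config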

                                                
                                    
TestFunctional/parallel/DashboardCmd (20.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-167609 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-167609 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 223264: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (20.45s)

                                                
                                    
TestFunctional/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-167609 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-167609 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (147.472984ms)

                                                
                                                
-- stdout --
	* [functional-167609] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17527
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 23:14:32.090252  222944 out.go:296] Setting OutFile to fd 1 ...
	I1030 23:14:32.090392  222944 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:14:32.090406  222944 out.go:309] Setting ErrFile to fd 2...
	I1030 23:14:32.090413  222944 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:14:32.090597  222944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1030 23:14:32.091143  222944 out.go:303] Setting JSON to false
	I1030 23:14:32.092171  222944 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25024,"bootTime":1698682648,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 23:14:32.092236  222944 start.go:138] virtualization: kvm guest
	I1030 23:14:32.094482  222944 out.go:177] * [functional-167609] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1030 23:14:32.096133  222944 out.go:177]   - MINIKUBE_LOCATION=17527
	I1030 23:14:32.096210  222944 notify.go:220] Checking for updates...
	I1030 23:14:32.097731  222944 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 23:14:32.099186  222944 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:14:32.100527  222944 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1030 23:14:32.101889  222944 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 23:14:32.103239  222944 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 23:14:32.105209  222944 config.go:182] Loaded profile config "functional-167609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:14:32.105815  222944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:14:32.105881  222944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:14:32.120972  222944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33345
	I1030 23:14:32.121341  222944 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:14:32.121934  222944 main.go:141] libmachine: Using API Version  1
	I1030 23:14:32.121965  222944 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:14:32.122297  222944 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:14:32.122507  222944 main.go:141] libmachine: (functional-167609) Calling .DriverName
	I1030 23:14:32.122738  222944 driver.go:378] Setting default libvirt URI to qemu:///system
	I1030 23:14:32.123037  222944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:14:32.123087  222944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:14:32.137064  222944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44797
	I1030 23:14:32.137449  222944 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:14:32.137883  222944 main.go:141] libmachine: Using API Version  1
	I1030 23:14:32.137903  222944 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:14:32.138185  222944 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:14:32.138369  222944 main.go:141] libmachine: (functional-167609) Calling .DriverName
	I1030 23:14:32.171211  222944 out.go:177] * Using the kvm2 driver based on existing profile
	I1030 23:14:32.172609  222944 start.go:298] selected driver: kvm2
	I1030 23:14:32.172634  222944 start.go:902] validating driver "kvm2" against &{Name:functional-167609 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.3 ClusterName:functional-167609 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.211 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1030 23:14:32.172819  222944 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 23:14:32.175457  222944 out.go:177] 
	W1030 23:14:32.176816  222944 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1030 23:14:32.178208  222944 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-167609 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)
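
The dry-run pair above checks that validation runs without creating anything: an undersized memory request is rejected with exit status 23, while the same dry-run without it passes. Sketch, with `minikube` standing in for the binary under test:

    # Rejected: 250MB is below the 1800MB usable minimum (RSRC_INSUFFICIENT_REQ_MEMORY, exit 23).
    minikube start -p functional-167609 --dry-run --memory 250MB --alsologtostderr \
      --driver=kvm2 --container-runtime=crio
    # The same dry-run with no undersized memory request validates cleanly.
    minikube start -p functional-167609 --dry-run --alsologtostderr -v=1 \
      --driver=kvm2 --container-runtime=crio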

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-167609 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-167609 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (172.365555ms)

                                                
                                                
-- stdout --
	* [functional-167609] minikube v1.32.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17527
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 23:14:32.549263  223049 out.go:296] Setting OutFile to fd 1 ...
	I1030 23:14:32.549434  223049 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:14:32.549444  223049 out.go:309] Setting ErrFile to fd 2...
	I1030 23:14:32.549449  223049 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:14:32.549733  223049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1030 23:14:32.550411  223049 out.go:303] Setting JSON to false
	I1030 23:14:32.551530  223049 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25025,"bootTime":1698682648,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1030 23:14:32.551594  223049 start.go:138] virtualization: kvm guest
	I1030 23:14:32.553751  223049 out.go:177] * [functional-167609] minikube v1.32.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I1030 23:14:32.555576  223049 out.go:177]   - MINIKUBE_LOCATION=17527
	I1030 23:14:32.555586  223049 notify.go:220] Checking for updates...
	I1030 23:14:32.557082  223049 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1030 23:14:32.558745  223049 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1030 23:14:32.560316  223049 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1030 23:14:32.561716  223049 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1030 23:14:32.563123  223049 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1030 23:14:32.565078  223049 config.go:182] Loaded profile config "functional-167609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:14:32.565746  223049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:14:32.565821  223049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:14:32.580932  223049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I1030 23:14:32.581407  223049 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:14:32.582011  223049 main.go:141] libmachine: Using API Version  1
	I1030 23:14:32.582039  223049 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:14:32.582434  223049 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:14:32.582635  223049 main.go:141] libmachine: (functional-167609) Calling .DriverName
	I1030 23:14:32.582897  223049 driver.go:378] Setting default libvirt URI to qemu:///system
	I1030 23:14:32.583316  223049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:14:32.583367  223049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:14:32.599234  223049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40861
	I1030 23:14:32.599701  223049 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:14:32.600169  223049 main.go:141] libmachine: Using API Version  1
	I1030 23:14:32.600190  223049 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:14:32.600513  223049 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:14:32.600711  223049 main.go:141] libmachine: (functional-167609) Calling .DriverName
	I1030 23:14:32.642584  223049 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1030 23:14:32.644022  223049 start.go:298] selected driver: kvm2
	I1030 23:14:32.644044  223049 start.go:902] validating driver "kvm2" against &{Name:functional-167609 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17527/minikube-v1.32.0-1698684775-17527-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698660445-17527@sha256:34cb83e9cb3f0fe3ce8dcb727a873b33aee680fdd682fbcb5c46db345e9f67df Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.3 ClusterName:functional-167609 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.211 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1030 23:14:32.644214  223049 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1030 23:14:32.646715  223049 out.go:177] 
	W1030 23:14:32.648072  223049 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1030 23:14:32.649379  223049 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 status -o json
E1030 23:14:30.631360  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
E1030 23:14:30.637299  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
E1030 23:14:30.647641  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
E1030 23:14:30.667982  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
E1030 23:14:30.708339  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
E1030 23:14:30.788707  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/StatusCmd (1.23s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (14.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-167609 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-167609 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-nlgjr" [4a630564-995b-4d4b-bd60-c761fef508e0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-nlgjr" [4a630564-995b-4d4b-bd60-c761fef508e0] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 14.016781121s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.211:32587
functional_test.go:1674: http://192.168.50.211:32587: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-nlgjr

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.211:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.211:32587
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (14.55s)
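
As a shell sketch of the connect flow (the test itself fetches the URL with a Go HTTP client; curl below is only an illustrative stand-in, and `minikube` stands in for the binary under test):

    kubectl --context functional-167609 create deployment hello-node-connect \
      --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-167609 expose deployment hello-node-connect \
      --type=NodePort --port=8080
    # Resolve the NodePort URL and hit it once the pod is Running.
    URL=$(minikube -p functional-167609 service hello-node-connect --url)
    curl -s "$URL"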

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (38.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b0f034ff-34d1-4cc4-ac33-839b7c3021b4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.015775912s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-167609 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-167609 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-167609 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-167609 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-167609 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [805fa34b-c29c-444c-92b2-faa9113c904b] Pending
helpers_test.go:344: "sp-pod" [805fa34b-c29c-444c-92b2-faa9113c904b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [805fa34b-c29c-444c-92b2-faa9113c904b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.039517773s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-167609 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-167609 delete -f testdata/storage-provisioner/pod.yaml
E1030 23:14:35.753784  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-167609 delete -f testdata/storage-provisioner/pod.yaml: (1.408980194s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-167609 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3bbc1553-cc9a-48ba-87b4-4d8b7529308a] Pending
helpers_test.go:344: "sp-pod" [3bbc1553-cc9a-48ba-87b4-4d8b7529308a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3bbc1553-cc9a-48ba-87b4-4d8b7529308a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.075174345s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-167609 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.67s)
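
The persistence check above, condensed into shell. The two manifests live under testdata/storage-provisioner/ in the minikube source tree; their contents are not shown in this log.

    kubectl --context functional-167609 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-167609 apply -f testdata/storage-provisioner/pod.yaml
    # Write through the mounted claim, recreate the pod, and check the file survived.
    kubectl --context functional-167609 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-167609 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-167609 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-167609 exec sp-pod -- ls /tmp/mount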

                                                
                                    
TestFunctional/parallel/SSHCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh -n functional-167609 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 cp functional-167609:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3459365935/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh -n functional-167609 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.05s)

                                                
                                    
TestFunctional/parallel/MySQL (29.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-167609 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-rqgq6" [ed195055-ddd5-439f-91f5-d2cd3ccc3902] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-rqgq6" [ed195055-ddd5-439f-91f5-d2cd3ccc3902] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.037878837s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-167609 exec mysql-859648c796-rqgq6 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-167609 exec mysql-859648c796-rqgq6 -- mysql -ppassword -e "show databases;": exit status 1 (164.456545ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-167609 exec mysql-859648c796-rqgq6 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-167609 exec mysql-859648c796-rqgq6 -- mysql -ppassword -e "show databases;": exit status 1 (141.855466ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-167609 exec mysql-859648c796-rqgq6 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.47s)
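Note: the ERROR 2002 failures above are expected while mysqld is still initializing inside the pod; the test simply re-runs the query until it succeeds. A minimal sketch of the same readiness check, reusing the pod name and context from this run (the retry loop itself is hypothetical, not part of the test):
    # Re-run the query until mysqld accepts connections on its socket.
    until kubectl --context functional-167609 exec mysql-859648c796-rqgq6 -- \
        mysql -ppassword -e "show databases;"; do
      sleep 2
    done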

                                                
                                    
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/216005/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "sudo cat /etc/test/nested/copy/216005/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
TestFunctional/parallel/CertSync (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/216005.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "sudo cat /etc/ssl/certs/216005.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/216005.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "sudo cat /usr/share/ca-certificates/216005.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/2160052.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "sudo cat /etc/ssl/certs/2160052.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/2160052.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "sudo cat /usr/share/ca-certificates/2160052.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.66s)
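Note: the numeric filenames checked above (51391683.0, 3ec20f2e.0) appear to follow OpenSSL's subject-hash naming for CA certificates. A hedged sketch of deriving that hash for the synced cert, assuming openssl is available inside the guest:
    # Print the subject hash that the .0 filename is derived from.
    out/minikube-linux-amd64 -p functional-167609 ssh "sudo openssl x509 -noout -hash -in /etc/ssl/certs/216005.pem"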

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-167609 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167609 ssh "sudo systemctl is-active docker": exit status 1 (271.133543ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167609 ssh "sudo systemctl is-active containerd": exit status 1 (253.719909ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)
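Note: the non-zero exits above are the expected outcome here: with crio as the active runtime, systemctl is-active reports "inactive" for docker and containerd and exits non-zero (status 3 in this run), which minikube ssh propagates. A minimal sketch of the same check for this profile:
    # Both runtimes should report "inactive" on a crio-backed node.
    for unit in docker containerd; do
      out/minikube-linux-amd64 -p functional-167609 ssh "sudo systemctl is-active $unit" || true
    done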

                                                
                                    
TestFunctional/parallel/License (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.50s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-167609 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-167609
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-167609
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-167609 image ls --format short --alsologtostderr:
I1030 23:14:46.182013  224056 out.go:296] Setting OutFile to fd 1 ...
I1030 23:14:46.182122  224056 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1030 23:14:46.182131  224056 out.go:309] Setting ErrFile to fd 2...
I1030 23:14:46.182136  224056 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1030 23:14:46.182342  224056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
I1030 23:14:46.182945  224056 config.go:182] Loaded profile config "functional-167609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1030 23:14:46.183042  224056 config.go:182] Loaded profile config "functional-167609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1030 23:14:46.183393  224056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1030 23:14:46.183445  224056 main.go:141] libmachine: Launching plugin server for driver kvm2
I1030 23:14:46.201410  224056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34249
I1030 23:14:46.201933  224056 main.go:141] libmachine: () Calling .GetVersion
I1030 23:14:46.202551  224056 main.go:141] libmachine: Using API Version  1
I1030 23:14:46.202588  224056 main.go:141] libmachine: () Calling .SetConfigRaw
I1030 23:14:46.202961  224056 main.go:141] libmachine: () Calling .GetMachineName
I1030 23:14:46.203182  224056 main.go:141] libmachine: (functional-167609) Calling .GetState
I1030 23:14:46.205165  224056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1030 23:14:46.205206  224056 main.go:141] libmachine: Launching plugin server for driver kvm2
I1030 23:14:46.220650  224056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33637
I1030 23:14:46.221135  224056 main.go:141] libmachine: () Calling .GetVersion
I1030 23:14:46.221624  224056 main.go:141] libmachine: Using API Version  1
I1030 23:14:46.221646  224056 main.go:141] libmachine: () Calling .SetConfigRaw
I1030 23:14:46.222017  224056 main.go:141] libmachine: () Calling .GetMachineName
I1030 23:14:46.222221  224056 main.go:141] libmachine: (functional-167609) Calling .DriverName
I1030 23:14:46.222466  224056 ssh_runner.go:195] Run: systemctl --version
I1030 23:14:46.222491  224056 main.go:141] libmachine: (functional-167609) Calling .GetSSHHostname
I1030 23:14:46.225452  224056 main.go:141] libmachine: (functional-167609) DBG | domain functional-167609 has defined MAC address 52:54:00:ab:26:d8 in network mk-functional-167609
I1030 23:14:46.225903  224056 main.go:141] libmachine: (functional-167609) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:26:d8", ip: ""} in network mk-functional-167609: {Iface:virbr1 ExpiryTime:2023-10-31 00:11:40 +0000 UTC Type:0 Mac:52:54:00:ab:26:d8 Iaid: IPaddr:192.168.50.211 Prefix:24 Hostname:functional-167609 Clientid:01:52:54:00:ab:26:d8}
I1030 23:14:46.225967  224056 main.go:141] libmachine: (functional-167609) DBG | domain functional-167609 has defined IP address 192.168.50.211 and MAC address 52:54:00:ab:26:d8 in network mk-functional-167609
I1030 23:14:46.226141  224056 main.go:141] libmachine: (functional-167609) Calling .GetSSHPort
I1030 23:14:46.226332  224056 main.go:141] libmachine: (functional-167609) Calling .GetSSHKeyPath
I1030 23:14:46.226498  224056 main.go:141] libmachine: (functional-167609) Calling .GetSSHUsername
I1030 23:14:46.226624  224056 sshutil.go:53] new ssh client: &{IP:192.168.50.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/functional-167609/id_rsa Username:docker}
I1030 23:14:46.339250  224056 ssh_runner.go:195] Run: sudo crictl images --output json
I1030 23:14:46.422059  224056 main.go:141] libmachine: Making call to close driver server
I1030 23:14:46.422075  224056 main.go:141] libmachine: (functional-167609) Calling .Close
I1030 23:14:46.422394  224056 main.go:141] libmachine: Successfully made call to close driver server
I1030 23:14:46.422419  224056 main.go:141] libmachine: Making call to close connection to plugin binary
I1030 23:14:46.422437  224056 main.go:141] libmachine: Making call to close driver server
I1030 23:14:46.422448  224056 main.go:141] libmachine: (functional-167609) Calling .Close
I1030 23:14:46.422701  224056 main.go:141] libmachine: Successfully made call to close driver server
I1030 23:14:46.422717  224056 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-167609 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 5d750b18809b5168c32d73c123c684d85e2914c1867d1035aba348eb2ae077db
repoDigests:
- localhost/minikube-local-cache-test@sha256:0c8169d70f165bbaaa3f57b13c9be3ced72f3e75bfa5afdd3cdecb94fbb06dff
repoTags:
- localhost/minikube-local-cache-test:functional-167609
size: "3345"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
- registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "61498678"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 593aee2afb642798b83a85306d2625fd7f089c0a1242c7e75a237846d80aa2a0
repoDigests:
- docker.io/library/nginx@sha256:0d60ba9498d4491525334696a736b4c19b56231b972061fab2f536d48ebfd7ce
- docker.io/library/nginx@sha256:add4792d930c25dd2abf2ef9ea79de578097a1c175a16ab25814332fe33622de
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-167609
size: "34114467"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
- registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "123188534"
- id: bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "74691991"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "127165392"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-167609 image ls --format yaml --alsologtostderr:
I1030 23:14:46.487596  224080 out.go:296] Setting OutFile to fd 1 ...
I1030 23:14:46.487768  224080 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1030 23:14:46.487783  224080 out.go:309] Setting ErrFile to fd 2...
I1030 23:14:46.487790  224080 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1030 23:14:46.488003  224080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
I1030 23:14:46.488664  224080 config.go:182] Loaded profile config "functional-167609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1030 23:14:46.488787  224080 config.go:182] Loaded profile config "functional-167609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1030 23:14:46.489318  224080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1030 23:14:46.489381  224080 main.go:141] libmachine: Launching plugin server for driver kvm2
I1030 23:14:46.507164  224080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35063
I1030 23:14:46.507743  224080 main.go:141] libmachine: () Calling .GetVersion
I1030 23:14:46.508458  224080 main.go:141] libmachine: Using API Version  1
I1030 23:14:46.508482  224080 main.go:141] libmachine: () Calling .SetConfigRaw
I1030 23:14:46.508865  224080 main.go:141] libmachine: () Calling .GetMachineName
I1030 23:14:46.509101  224080 main.go:141] libmachine: (functional-167609) Calling .GetState
I1030 23:14:46.511301  224080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1030 23:14:46.511363  224080 main.go:141] libmachine: Launching plugin server for driver kvm2
I1030 23:14:46.527191  224080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41075
I1030 23:14:46.527626  224080 main.go:141] libmachine: () Calling .GetVersion
I1030 23:14:46.528221  224080 main.go:141] libmachine: Using API Version  1
I1030 23:14:46.528244  224080 main.go:141] libmachine: () Calling .SetConfigRaw
I1030 23:14:46.528627  224080 main.go:141] libmachine: () Calling .GetMachineName
I1030 23:14:46.528953  224080 main.go:141] libmachine: (functional-167609) Calling .DriverName
I1030 23:14:46.529196  224080 ssh_runner.go:195] Run: systemctl --version
I1030 23:14:46.529230  224080 main.go:141] libmachine: (functional-167609) Calling .GetSSHHostname
I1030 23:14:46.532639  224080 main.go:141] libmachine: (functional-167609) DBG | domain functional-167609 has defined MAC address 52:54:00:ab:26:d8 in network mk-functional-167609
I1030 23:14:46.533096  224080 main.go:141] libmachine: (functional-167609) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:26:d8", ip: ""} in network mk-functional-167609: {Iface:virbr1 ExpiryTime:2023-10-31 00:11:40 +0000 UTC Type:0 Mac:52:54:00:ab:26:d8 Iaid: IPaddr:192.168.50.211 Prefix:24 Hostname:functional-167609 Clientid:01:52:54:00:ab:26:d8}
I1030 23:14:46.533132  224080 main.go:141] libmachine: (functional-167609) DBG | domain functional-167609 has defined IP address 192.168.50.211 and MAC address 52:54:00:ab:26:d8 in network mk-functional-167609
I1030 23:14:46.533388  224080 main.go:141] libmachine: (functional-167609) Calling .GetSSHPort
I1030 23:14:46.533590  224080 main.go:141] libmachine: (functional-167609) Calling .GetSSHKeyPath
I1030 23:14:46.533768  224080 main.go:141] libmachine: (functional-167609) Calling .GetSSHUsername
I1030 23:14:46.534041  224080 sshutil.go:53] new ssh client: &{IP:192.168.50.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/functional-167609/id_rsa Username:docker}
I1030 23:14:46.690983  224080 ssh_runner.go:195] Run: sudo crictl images --output json
I1030 23:14:46.754220  224080 main.go:141] libmachine: Making call to close driver server
I1030 23:14:46.754242  224080 main.go:141] libmachine: (functional-167609) Calling .Close
I1030 23:14:46.754537  224080 main.go:141] libmachine: Successfully made call to close driver server
I1030 23:14:46.754561  224080 main.go:141] libmachine: Making call to close connection to plugin binary
I1030 23:14:46.754582  224080 main.go:141] libmachine: Making call to close driver server
I1030 23:14:46.754595  224080 main.go:141] libmachine: (functional-167609) Calling .Close
I1030 23:14:46.754812  224080 main.go:141] libmachine: Successfully made call to close driver server
I1030 23:14:46.754827  224080 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)
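Note: as the stderr trace above shows, image ls connects to the node over SSH and reads the runtime's image store with "sudo crictl images --output json", then renders it in the requested format. A minimal sketch querying the same data directly for this profile:
    # Inspect the CRI image store that "image ls" renders.
    out/minikube-linux-amd64 -p functional-167609 ssh "sudo crictl images --output json"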

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-167609
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.02s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (15.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-167609 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-167609 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-rz9ph" [52af736d-b0da-4a04-9fd2-bb96e1636f3b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-rz9ph" [52af736d-b0da-4a04-9fd2-bb96e1636f3b] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 15.027191364s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (15.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 image load --daemon gcr.io/google-containers/addon-resizer:functional-167609 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-167609 image load --daemon gcr.io/google-containers/addon-resizer:functional-167609 --alsologtostderr: (6.449513322s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 image load --daemon gcr.io/google-containers/addon-resizer:functional-167609 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-167609 image load --daemon gcr.io/google-containers/addon-resizer:functional-167609 --alsologtostderr: (2.474346534s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-167609
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 image load --daemon gcr.io/google-containers/addon-resizer:functional-167609 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-167609 image load --daemon gcr.io/google-containers/addon-resizer:functional-167609 --alsologtostderr: (5.046766103s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 image ls
E1030 23:14:30.949285  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 service list -o json
functional_test.go:1493: Took "374.309262ms" to run "out/minikube-linux-amd64 -p functional-167609 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.211:30987
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.211:30987
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)
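Note: the endpoint reported above is the node IP (192.168.50.211) plus the NodePort assigned to the hello-node service created in ServiceCmd/DeployApp. A hedged sketch of retrieving the same port directly (the jsonpath query is illustrative, not part of the test):
    # Look up the NodePort, then ask minikube for the assembled URL.
    kubectl --context functional-167609 get service hello-node -o jsonpath='{.spec.ports[0].nodePort}'
    out/minikube-linux-amd64 -p functional-167609 service hello-node --url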

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 image save gcr.io/google-containers/addon-resizer:functional-167609 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
E1030 23:14:31.270425  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-167609 image save gcr.io/google-containers/addon-resizer:functional-167609 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.736952901s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.74s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "277.622916ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "71.752885ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-167609 /tmp/TestFunctionalparallelMountCmdany-port3092725940/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1698707671527234932" to /tmp/TestFunctionalparallelMountCmdany-port3092725940/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1698707671527234932" to /tmp/TestFunctionalparallelMountCmdany-port3092725940/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1698707671527234932" to /tmp/TestFunctionalparallelMountCmdany-port3092725940/001/test-1698707671527234932
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167609 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (281.1651ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
E1030 23:14:31.911660  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 30 23:14 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 30 23:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 30 23:14 test-1698707671527234932
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh cat /mount-9p/test-1698707671527234932
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-167609 replace --force -f testdata/busybox-mount-test.yaml
E1030 23:14:33.192660  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [6e6a49d3-0c50-45f7-b5c3-cd00314a9b4a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [6e6a49d3-0c50-45f7-b5c3-cd00314a9b4a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [6e6a49d3-0c50-45f7-b5c3-cd00314a9b4a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.056653432s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-167609 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh stat /mount-9p/created-by-pod
E1030 23:14:40.874139  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-167609 /tmp/TestFunctionalparallelMountCmdany-port3092725940/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.10s)
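Note: the first findmnt probe fails because the background mount process has not yet finished establishing the 9p mount; the test retries, then verifies that the host-written files are visible in the guest before unmounting. A condensed sketch of the flow, with a hypothetical host directory standing in for the per-run temp dir:
    # /tmp/hostdir stands in for the per-run temp dir used above.
    out/minikube-linux-amd64 mount -p functional-167609 /tmp/hostdir:/mount-9p &
    out/minikube-linux-amd64 -p functional-167609 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-167609 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-167609 ssh "sudo umount -f /mount-9p"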

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "341.204593ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "63.099964ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 image rm gcr.io/google-containers/addon-resizer:functional-167609 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-167609 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.684381711s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-167609
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 image save --daemon gcr.io/google-containers/addon-resizer:functional-167609 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-167609 image save --daemon gcr.io/google-containers/addon-resizer:functional-167609 --alsologtostderr: (2.15099764s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-167609
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.19s)
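Note: taken together, ImageSaveToFile, ImageLoadFromFile and ImageSaveDaemon exercise a full image round trip between the cluster and the host. A condensed sketch with the tag from this run and a hypothetical local path in place of the Jenkins workspace:
    # Save the image from the cluster to a tarball, load it back, then list images.
    out/minikube-linux-amd64 -p functional-167609 image save gcr.io/google-containers/addon-resizer:functional-167609 ./addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-167609 image load ./addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-167609 image ls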

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-167609 /tmp/TestFunctionalparallelMountCmdspecific-port787789362/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167609 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (328.391274ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-167609 /tmp/TestFunctionalparallelMountCmdspecific-port787789362/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167609 ssh "sudo umount -f /mount-9p": exit status 1 (247.232477ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-167609 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-167609 /tmp/TestFunctionalparallelMountCmdspecific-port787789362/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.17s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-167609 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2632199463/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-167609 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2632199463/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-167609 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2632199463/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167609 ssh "findmnt -T" /mount1: exit status 1 (334.392076ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-167609 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-167609 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-167609 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2632199463/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-167609 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2632199463/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-167609 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2632199463/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)
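Note: the "unable to find parent, assuming dead" lines indicate the three background mount processes were already gone when the test tried to stop them, which is what the cleanup check expects after the kill command seen above:
    # Terminate any leftover background mount processes for the profile.
    out/minikube-linux-amd64 mount -p functional-167609 --kill=true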

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-167609
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-167609
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-167609
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (103.78s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-371910 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1030 23:15:11.595490  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
E1030 23:15:52.556026  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-371910 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m43.782096553s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (103.78s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.02s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-371910 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-371910 addons enable ingress --alsologtostderr -v=5: (13.020705685s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.02s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.56s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-371910 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.56s)

                                                
                                    
TestJSONOutput/start/Command (98.01s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-443599 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1030 23:19:58.317344  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
E1030 23:20:36.507271  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-443599 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m38.013937969s)
--- PASS: TestJSONOutput/start/Command (98.01s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-443599 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-443599 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.11s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-443599 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-443599 --output=json --user=testUser: (7.110748572s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-998042 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-998042 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (81.01486ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9d395546-5675-4346-adf2-04232091dfbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-998042] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a83b23f-5a88-4e67-b89e-cb1b38978bad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17527"}}
	{"specversion":"1.0","id":"6c1ccad5-9604-4cfa-9bb9-4d93fbaac40c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c3d44eb2-bb50-4639-b478-522a5ba46bc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig"}}
	{"specversion":"1.0","id":"6dd0bb2a-480c-4e8f-965a-405061e10eb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube"}}
	{"specversion":"1.0","id":"ecfd95f7-71a5-4fec-830b-5510740abdf0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"95cb8a7c-8c96-4c92-b19c-bd905a05f757","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ff2e3fbd-0dd6-47a2-99e8-00435beb1c45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-998042" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-998042
--- PASS: TestErrorJSONOutput (0.23s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (104.73s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-426657 --driver=kvm2  --container-runtime=crio
E1030 23:21:58.427859  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1030 23:22:08.185560  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1030 23:22:08.190840  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1030 23:22:08.201074  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1030 23:22:08.221356  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1030 23:22:08.261654  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1030 23:22:08.342018  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1030 23:22:08.502458  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1030 23:22:08.823062  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1030 23:22:09.464009  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1030 23:22:10.744585  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1030 23:22:13.305549  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1030 23:22:18.426656  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1030 23:22:28.666886  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-426657 --driver=kvm2  --container-runtime=crio: (52.216873803s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-429080 --driver=kvm2  --container-runtime=crio
E1030 23:22:49.147154  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-429080 --driver=kvm2  --container-runtime=crio: (49.81520529s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-426657
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-429080
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-429080" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-429080
helpers_test.go:175: Cleaning up "first-426657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-426657
--- PASS: TestMinikubeProfile (104.73s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.46s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-315410 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1030 23:23:30.108711  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-315410 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.459602431s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.46s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-315410 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-315410 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.05s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-330887 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1030 23:24:14.584451  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-330887 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.048236577s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.05s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-330887 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-330887 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.42s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.88s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-315410 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.88s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.43s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-330887 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-330887 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.43s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-330887
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-330887: (1.25873448s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.87s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-330887
E1030 23:24:30.632407  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
E1030 23:24:42.269095  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-330887: (20.866095211s)
--- PASS: TestMountStart/serial/RestartStopped (21.87s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-330887 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-330887 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (109.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-370491 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1030 23:24:52.029384  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-370491 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m48.678831064s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.13s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370491 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370491 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-370491 -- rollout status deployment/busybox: (2.088093552s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370491 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370491 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370491 -- exec busybox-5bc68d56bd-4t8fk -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370491 -- exec busybox-5bc68d56bd-7hhs5 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370491 -- exec busybox-5bc68d56bd-4t8fk -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370491 -- exec busybox-5bc68d56bd-7hhs5 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370491 -- exec busybox-5bc68d56bd-4t8fk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370491 -- exec busybox-5bc68d56bd-7hhs5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.91s)

                                                
                                    
TestMultiNode/serial/AddNode (40.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-370491 -v 3 --alsologtostderr
E1030 23:27:08.185449  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-370491 -v 3 --alsologtostderr: (40.136068801s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (40.74s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 cp testdata/cp-test.txt multinode-370491:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 ssh -n multinode-370491 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 cp multinode-370491:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile715226696/001/cp-test_multinode-370491.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 ssh -n multinode-370491 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 cp multinode-370491:/home/docker/cp-test.txt multinode-370491-m02:/home/docker/cp-test_multinode-370491_multinode-370491-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 ssh -n multinode-370491 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 ssh -n multinode-370491-m02 "sudo cat /home/docker/cp-test_multinode-370491_multinode-370491-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 cp multinode-370491:/home/docker/cp-test.txt multinode-370491-m03:/home/docker/cp-test_multinode-370491_multinode-370491-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 ssh -n multinode-370491 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 ssh -n multinode-370491-m03 "sudo cat /home/docker/cp-test_multinode-370491_multinode-370491-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 cp testdata/cp-test.txt multinode-370491-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 ssh -n multinode-370491-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 cp multinode-370491-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile715226696/001/cp-test_multinode-370491-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 ssh -n multinode-370491-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 cp multinode-370491-m02:/home/docker/cp-test.txt multinode-370491:/home/docker/cp-test_multinode-370491-m02_multinode-370491.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 ssh -n multinode-370491-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 ssh -n multinode-370491 "sudo cat /home/docker/cp-test_multinode-370491-m02_multinode-370491.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 cp multinode-370491-m02:/home/docker/cp-test.txt multinode-370491-m03:/home/docker/cp-test_multinode-370491-m02_multinode-370491-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 ssh -n multinode-370491-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 ssh -n multinode-370491-m03 "sudo cat /home/docker/cp-test_multinode-370491-m02_multinode-370491-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 cp testdata/cp-test.txt multinode-370491-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 ssh -n multinode-370491-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 cp multinode-370491-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile715226696/001/cp-test_multinode-370491-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 ssh -n multinode-370491-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 cp multinode-370491-m03:/home/docker/cp-test.txt multinode-370491:/home/docker/cp-test_multinode-370491-m03_multinode-370491.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 ssh -n multinode-370491-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 ssh -n multinode-370491 "sudo cat /home/docker/cp-test_multinode-370491-m03_multinode-370491.txt"
E1030 23:27:35.870563  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 cp multinode-370491-m03:/home/docker/cp-test.txt multinode-370491-m02:/home/docker/cp-test_multinode-370491-m03_multinode-370491-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 ssh -n multinode-370491-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 ssh -n multinode-370491-m02 "sudo cat /home/docker/cp-test_multinode-370491-m03_multinode-370491-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.98s)

                                                
                                    
TestMultiNode/serial/StopNode (3.00s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-370491 node stop m03: (2.098746807s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-370491 status: exit status 7 (445.755036ms)

                                                
                                                
-- stdout --
	multinode-370491
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-370491-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-370491-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-370491 status --alsologtostderr: exit status 7 (454.258688ms)

                                                
                                                
-- stdout --
	multinode-370491
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-370491-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-370491-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1030 23:27:39.285353  231589 out.go:296] Setting OutFile to fd 1 ...
	I1030 23:27:39.285582  231589 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:27:39.285590  231589 out.go:309] Setting ErrFile to fd 2...
	I1030 23:27:39.285595  231589 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1030 23:27:39.285771  231589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1030 23:27:39.285935  231589 out.go:303] Setting JSON to false
	I1030 23:27:39.285971  231589 mustload.go:65] Loading cluster: multinode-370491
	I1030 23:27:39.286102  231589 notify.go:220] Checking for updates...
	I1030 23:27:39.286338  231589 config.go:182] Loaded profile config "multinode-370491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1030 23:27:39.286351  231589 status.go:255] checking status of multinode-370491 ...
	I1030 23:27:39.286735  231589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:27:39.286801  231589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:27:39.301586  231589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41369
	I1030 23:27:39.302061  231589 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:27:39.302614  231589 main.go:141] libmachine: Using API Version  1
	I1030 23:27:39.302638  231589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:27:39.303062  231589 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:27:39.303248  231589 main.go:141] libmachine: (multinode-370491) Calling .GetState
	I1030 23:27:39.304911  231589 status.go:330] multinode-370491 host status = "Running" (err=<nil>)
	I1030 23:27:39.304928  231589 host.go:66] Checking if "multinode-370491" exists ...
	I1030 23:27:39.305250  231589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:27:39.305291  231589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:27:39.319796  231589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35819
	I1030 23:27:39.320209  231589 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:27:39.320650  231589 main.go:141] libmachine: Using API Version  1
	I1030 23:27:39.320678  231589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:27:39.321028  231589 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:27:39.321204  231589 main.go:141] libmachine: (multinode-370491) Calling .GetIP
	I1030 23:27:39.323939  231589 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:27:39.324369  231589 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:27:39.324403  231589 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:27:39.324558  231589 host.go:66] Checking if "multinode-370491" exists ...
	I1030 23:27:39.324859  231589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:27:39.324895  231589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:27:39.339620  231589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44609
	I1030 23:27:39.340035  231589 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:27:39.340550  231589 main.go:141] libmachine: Using API Version  1
	I1030 23:27:39.340579  231589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:27:39.340871  231589 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:27:39.341093  231589 main.go:141] libmachine: (multinode-370491) Calling .DriverName
	I1030 23:27:39.341297  231589 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1030 23:27:39.341345  231589 main.go:141] libmachine: (multinode-370491) Calling .GetSSHHostname
	I1030 23:27:39.343859  231589 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:27:39.344301  231589 main.go:141] libmachine: (multinode-370491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:7c:a3", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:25:07 +0000 UTC Type:0 Mac:52:54:00:40:7c:a3 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-370491 Clientid:01:52:54:00:40:7c:a3}
	I1030 23:27:39.344333  231589 main.go:141] libmachine: (multinode-370491) DBG | domain multinode-370491 has defined IP address 192.168.39.231 and MAC address 52:54:00:40:7c:a3 in network mk-multinode-370491
	I1030 23:27:39.344426  231589 main.go:141] libmachine: (multinode-370491) Calling .GetSSHPort
	I1030 23:27:39.344606  231589 main.go:141] libmachine: (multinode-370491) Calling .GetSSHKeyPath
	I1030 23:27:39.344748  231589 main.go:141] libmachine: (multinode-370491) Calling .GetSSHUsername
	I1030 23:27:39.344902  231589 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491/id_rsa Username:docker}
	I1030 23:27:39.438941  231589 ssh_runner.go:195] Run: systemctl --version
	I1030 23:27:39.444959  231589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 23:27:39.459120  231589 kubeconfig.go:92] found "multinode-370491" server: "https://192.168.39.231:8443"
	I1030 23:27:39.459151  231589 api_server.go:166] Checking apiserver status ...
	I1030 23:27:39.459185  231589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1030 23:27:39.472533  231589 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1063/cgroup
	I1030 23:27:39.480957  231589 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod377aac2edfa5973c73516a60b3dd1cd5/crio-724b0a6b7a4a53994e7cc49beec2a61445fcd4c11b7aaf7be3c3aacedbe2a47b"
	I1030 23:27:39.481032  231589 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod377aac2edfa5973c73516a60b3dd1cd5/crio-724b0a6b7a4a53994e7cc49beec2a61445fcd4c11b7aaf7be3c3aacedbe2a47b/freezer.state
	I1030 23:27:39.490672  231589 api_server.go:204] freezer state: "THAWED"
	I1030 23:27:39.490702  231589 api_server.go:253] Checking apiserver healthz at https://192.168.39.231:8443/healthz ...
	I1030 23:27:39.495968  231589 api_server.go:279] https://192.168.39.231:8443/healthz returned 200:
	ok
	I1030 23:27:39.495991  231589 status.go:421] multinode-370491 apiserver status = Running (err=<nil>)
	I1030 23:27:39.496001  231589 status.go:257] multinode-370491 status: &{Name:multinode-370491 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1030 23:27:39.496017  231589 status.go:255] checking status of multinode-370491-m02 ...
	I1030 23:27:39.496306  231589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:27:39.496341  231589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:27:39.513237  231589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42711
	I1030 23:27:39.513690  231589 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:27:39.514161  231589 main.go:141] libmachine: Using API Version  1
	I1030 23:27:39.514188  231589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:27:39.514534  231589 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:27:39.514726  231589 main.go:141] libmachine: (multinode-370491-m02) Calling .GetState
	I1030 23:27:39.516320  231589 status.go:330] multinode-370491-m02 host status = "Running" (err=<nil>)
	I1030 23:27:39.516339  231589 host.go:66] Checking if "multinode-370491-m02" exists ...
	I1030 23:27:39.516695  231589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:27:39.516739  231589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:27:39.531723  231589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36651
	I1030 23:27:39.532195  231589 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:27:39.532674  231589 main.go:141] libmachine: Using API Version  1
	I1030 23:27:39.532703  231589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:27:39.533096  231589 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:27:39.533296  231589 main.go:141] libmachine: (multinode-370491-m02) Calling .GetIP
	I1030 23:27:39.535688  231589 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:27:39.536121  231589 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:27:39.536204  231589 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:27:39.536276  231589 host.go:66] Checking if "multinode-370491-m02" exists ...
	I1030 23:27:39.536579  231589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:27:39.536615  231589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:27:39.551285  231589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45131
	I1030 23:27:39.551711  231589 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:27:39.552206  231589 main.go:141] libmachine: Using API Version  1
	I1030 23:27:39.552230  231589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:27:39.552540  231589 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:27:39.552756  231589 main.go:141] libmachine: (multinode-370491-m02) Calling .DriverName
	I1030 23:27:39.553075  231589 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1030 23:27:39.553098  231589 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHHostname
	I1030 23:27:39.555835  231589 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:27:39.556328  231589 main.go:141] libmachine: (multinode-370491-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:1d:9c", ip: ""} in network mk-multinode-370491: {Iface:virbr1 ExpiryTime:2023-10-31 00:26:13 +0000 UTC Type:0 Mac:52:54:00:a1:1d:9c Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:multinode-370491-m02 Clientid:01:52:54:00:a1:1d:9c}
	I1030 23:27:39.556363  231589 main.go:141] libmachine: (multinode-370491-m02) DBG | domain multinode-370491-m02 has defined IP address 192.168.39.85 and MAC address 52:54:00:a1:1d:9c in network mk-multinode-370491
	I1030 23:27:39.556553  231589 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHPort
	I1030 23:27:39.556733  231589 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHKeyPath
	I1030 23:27:39.556916  231589 main.go:141] libmachine: (multinode-370491-m02) Calling .GetSSHUsername
	I1030 23:27:39.557096  231589 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17527-208817/.minikube/machines/multinode-370491-m02/id_rsa Username:docker}
	I1030 23:27:39.648443  231589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1030 23:27:39.661353  231589 status.go:257] multinode-370491-m02 status: &{Name:multinode-370491-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1030 23:27:39.661389  231589 status.go:255] checking status of multinode-370491-m03 ...
	I1030 23:27:39.661692  231589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1030 23:27:39.661728  231589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1030 23:27:39.677682  231589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I1030 23:27:39.678065  231589 main.go:141] libmachine: () Calling .GetVersion
	I1030 23:27:39.678540  231589 main.go:141] libmachine: Using API Version  1
	I1030 23:27:39.678571  231589 main.go:141] libmachine: () Calling .SetConfigRaw
	I1030 23:27:39.678924  231589 main.go:141] libmachine: () Calling .GetMachineName
	I1030 23:27:39.679120  231589 main.go:141] libmachine: (multinode-370491-m03) Calling .GetState
	I1030 23:27:39.680481  231589 status.go:330] multinode-370491-m03 host status = "Stopped" (err=<nil>)
	I1030 23:27:39.680499  231589 status.go:343] host is not running, skipping remaining checks
	I1030 23:27:39.680504  231589 status.go:257] multinode-370491-m03 status: &{Name:multinode-370491-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.00s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (28.70s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-370491 node start m03 --alsologtostderr: (28.049458599s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.70s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-370491 node delete m03: (1.2625439s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.82s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (447.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-370491 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1030 23:42:08.184775  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1030 23:44:14.583572  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1030 23:44:30.632731  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
E1030 23:47:08.185430  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1030 23:47:33.678253  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
E1030 23:49:14.584536  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-370491 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m26.696736251s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370491 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (447.26s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (49.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-370491
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-370491-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-370491-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (79.962465ms)

                                                
                                                
-- stdout --
	* [multinode-370491-m02] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17527
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-370491-m02' is duplicated with machine name 'multinode-370491-m02' in profile 'multinode-370491'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-370491-m03 --driver=kvm2  --container-runtime=crio
E1030 23:49:30.632026  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-370491-m03 --driver=kvm2  --container-runtime=crio: (47.824478772s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-370491
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-370491: exit status 80 (249.027445ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-370491
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-370491-m03 already exists in multinode-370491-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-370491-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-370491-m03: (1.020831161s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.24s)
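
Note: the MK_USAGE exit above is the expected result: a new profile name may not collide with a machine name that already belongs to another profile. A minimal sketch for checking what is already in use before picking a name (commands taken from this run; the multinode-370491 profile is specific to it):

    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 node list -p multinode-370491
    out/minikube-linux-amd64 start -p multinode-370491-m03 --driver=kvm2 --container-runtime=crio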

                                                
                                    
TestScheduledStopUnix (116.74s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-885447 --memory=2048 --driver=kvm2  --container-runtime=crio
E1030 23:55:11.232079  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-885447 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.934264515s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-885447 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-885447 -n scheduled-stop-885447
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-885447 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-885447 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-885447 -n scheduled-stop-885447
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-885447
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-885447 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-885447
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-885447: exit status 7 (81.967842ms)

                                                
                                                
-- stdout --
	scheduled-stop-885447
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-885447 -n scheduled-stop-885447
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-885447 -n scheduled-stop-885447: exit status 7 (76.739157ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-885447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-885447
--- PASS: TestScheduledStopUnix (116.74s)
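
Note: the scheduled-stop flow exercised above can be reproduced by hand. A minimal sketch, assuming the scheduled-stop-885447 profile from this run (all flags appear in the log above):

    out/minikube-linux-amd64 stop -p scheduled-stop-885447 --schedule 5m
    out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-885447 -n scheduled-stop-885447
    out/minikube-linux-amd64 stop -p scheduled-stop-885447 --cancel-scheduled
    out/minikube-linux-amd64 stop -p scheduled-stop-885447 --schedule 15s
    out/minikube-linux-amd64 status -p scheduled-stop-885447   # exit status 7 once the stop has fired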

                                                
                                    
TestKubernetesUpgrade (184.21s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-610124 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-610124 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m9.503244862s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-610124
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-610124: (7.155656686s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-610124 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-610124 status --format={{.Host}}: exit status 7 (86.564272ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-610124 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-610124 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.848625513s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-610124 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-610124 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-610124 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (128.729951ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-610124] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17527
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-610124
	    minikube start -p kubernetes-upgrade-610124 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6101242 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-610124 --kubernetes-version=v1.28.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-610124 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1030 23:59:14.584019  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-610124 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.052233512s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-610124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-610124
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-610124: (1.347180095s)
--- PASS: TestKubernetesUpgrade (184.21s)
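
Note: the supported path verified above is start on the old version, stop, then start again on the newer version against the same profile; a direct downgrade is refused with K8S_DOWNGRADE_UNSUPPORTED. A minimal sketch of the sequence (profile name and versions taken from this run):

    out/minikube-linux-amd64 start -p kubernetes-upgrade-610124 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-610124
    out/minikube-linux-amd64 start -p kubernetes-upgrade-610124 --memory=2200 --kubernetes-version=v1.28.3 --driver=kvm2 --container-runtime=crio
    kubectl --context kubernetes-upgrade-610124 version --output=json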

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-570131 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-570131 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (104.772502ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-570131] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17527
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
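
Note: --no-kubernetes and --kubernetes-version are mutually exclusive, which is what the MK_USAGE exit above checks. A minimal sketch of running without Kubernetes once any global version setting is cleared (the fix suggested in the stderr above):

    out/minikube-linux-amd64 config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-570131 --no-kubernetes --driver=kvm2 --container-runtime=crio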

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (106.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-570131 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-570131 --driver=kvm2  --container-runtime=crio: (1m45.81494668s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-570131 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (106.15s)

                                                
                                    
TestPause/serial/Start (77.68s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-511532 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-511532 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m17.679901898s)
--- PASS: TestPause/serial/Start (77.68s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (41.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-570131 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-570131 --no-kubernetes --driver=kvm2  --container-runtime=crio: (40.163330143s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-570131 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-570131 status -o json: exit status 2 (235.437298ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-570131","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-570131
E1030 23:59:30.631044  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-570131: (1.050095709s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (41.45s)
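
Note: restarting an existing profile with --no-kubernetes keeps the VM running while the Kubernetes components stop, which is why status exits non-zero above with Host "Running" and Kubelet/APIServer "Stopped". A minimal sketch for pulling those fields out of the JSON (jq is an assumption here, not part of the test):

    out/minikube-linux-amd64 -p NoKubernetes-570131 status -o json | jq '{Host, Kubelet, APIServer}'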

                                                
                                    
TestNoKubernetes/serial/Start (31.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-570131 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-570131 --no-kubernetes --driver=kvm2  --container-runtime=crio: (31.156413696s)
--- PASS: TestNoKubernetes/serial/Start (31.16s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.38s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.38s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-570131 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-570131 "sudo systemctl is-active --quiet service kubelet": exit status 1 (253.886417ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
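
Note: the verification above runs systemctl is-active inside the guest, which exits non-zero when the kubelet unit is not running (status 3 here, surfaced by minikube ssh as exit status 1). A minimal sketch of the same check:

    out/minikube-linux-amd64 ssh -p NoKubernetes-570131 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet is not active"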

                                                
                                    
TestNoKubernetes/serial/ProfileList (2.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.581012473s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.40s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-570131
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-570131: (2.304307s)
--- PASS: TestNoKubernetes/serial/Stop (2.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (22.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-570131 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-570131 --driver=kvm2  --container-runtime=crio: (22.91139738s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.91s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-570131 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-570131 "sudo systemctl is-active --quiet service kubelet": exit status 1 (354.725411ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                    
TestNetworkPlugins/group/false (5.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-740627 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-740627 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (143.526583ms)

                                                
                                                
-- stdout --
	* [false-740627] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17527
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1031 00:00:36.470209  242717 out.go:296] Setting OutFile to fd 1 ...
	I1031 00:00:36.470389  242717 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:00:36.470409  242717 out.go:309] Setting ErrFile to fd 2...
	I1031 00:00:36.470417  242717 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 00:00:36.470691  242717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17527-208817/.minikube/bin
	I1031 00:00:36.471525  242717 out.go:303] Setting JSON to false
	I1031 00:00:36.472568  242717 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27788,"bootTime":1698682648,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 00:00:36.472655  242717 start.go:138] virtualization: kvm guest
	I1031 00:00:36.475243  242717 out.go:177] * [false-740627] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 00:00:36.476915  242717 out.go:177]   - MINIKUBE_LOCATION=17527
	I1031 00:00:36.476978  242717 notify.go:220] Checking for updates...
	I1031 00:00:36.478551  242717 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 00:00:36.480049  242717 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17527-208817/kubeconfig
	I1031 00:00:36.481768  242717 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17527-208817/.minikube
	I1031 00:00:36.483221  242717 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 00:00:36.484824  242717 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 00:00:36.487189  242717 config.go:182] Loaded profile config "force-systemd-flag-768768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:00:36.487390  242717 config.go:182] Loaded profile config "pause-511532": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1031 00:00:36.487469  242717 config.go:182] Loaded profile config "stopped-upgrade-237143": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1031 00:00:36.487601  242717 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 00:00:36.527895  242717 out.go:177] * Using the kvm2 driver based on user configuration
	I1031 00:00:36.529858  242717 start.go:298] selected driver: kvm2
	I1031 00:00:36.529877  242717 start.go:902] validating driver "kvm2" against <nil>
	I1031 00:00:36.529894  242717 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 00:00:36.532278  242717 out.go:177] 
	W1031 00:00:36.534118  242717 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1031 00:00:36.535999  242717 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-740627 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-740627

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-740627

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-740627

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-740627

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-740627

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-740627

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-740627

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-740627

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-740627

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-740627

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-740627

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-740627" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-740627" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 30 Oct 2023 23:59:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0-beta.0
      name: cluster_info
    server: https://192.168.61.111:8443
  name: pause-511532
contexts:
- context:
    cluster: pause-511532
    extensions:
    - extension:
        last-update: Mon, 30 Oct 2023 23:59:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0-beta.0
      name: context_info
    namespace: default
    user: pause-511532
  name: pause-511532
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-511532
  user:
    client-certificate: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/pause-511532/client.crt
    client-key: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/pause-511532/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-740627

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-740627"

                                                
                                                
----------------------- debugLogs end: false-740627 [took: 4.656176343s] --------------------------------
helpers_test.go:175: Cleaning up "false-740627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-740627
--- PASS: TestNetworkPlugins/group/false (5.15s)
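
Note: the MK_USAGE exit above is the point of this test: the crio runtime requires a CNI, so --cni=false is rejected before any VM is created, and every debug probe afterwards reports a missing profile or context. A minimal sketch of a start line crio would accept, with bridge chosen only as an illustrative CNI:

    out/minikube-linux-amd64 start -p false-740627 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio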

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (158.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-225140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E1031 00:02:08.184917  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-225140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m38.326356252s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (158.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (156.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-640155 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-640155 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (2m36.214615699s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (156.21s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.41s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-237143
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (102.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-078843 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
E1031 00:04:13.679405  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
E1031 00:04:14.583494  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1031 00:04:30.631624  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-078843 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (1m42.153639541s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (102.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (7.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-225140 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b1852d63-bb78-4886-ab14-d077297eb4dd] Pending
helpers_test.go:344: "busybox" [b1852d63-bb78-4886-ab14-d077297eb4dd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b1852d63-bb78-4886-ab14-d077297eb4dd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.035896076s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-225140 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.50s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-225140 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-225140 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)
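
Note: the addon is enabled with overridden image and registry values, then verified by describing the deployment it creates in kube-system. A minimal sketch (overrides copied from the run above):

    out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-225140 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context old-k8s-version-225140 describe deploy/metrics-server -n kube-system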

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-640155 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [947d60e6-fd58-4eef-a133-d13c29d23138] Pending
helpers_test.go:344: "busybox" [947d60e6-fd58-4eef-a133-d13c29d23138] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [947d60e6-fd58-4eef-a133-d13c29d23138] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.031164652s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-640155 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-640155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-640155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.189854964s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-640155 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-078843 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ac0523db-98c6-4583-8cc4-b0cd6bea7a8b] Pending
helpers_test.go:344: "busybox" [ac0523db-98c6-4583-8cc4-b0cd6bea7a8b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ac0523db-98c6-4583-8cc4-b0cd6bea7a8b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.022632768s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-078843 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-078843 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-078843 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.178008418s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-078843 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-892233 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-892233 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (1m0.435162136s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (791.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-225140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-225140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (13m11.392472474s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-225140 -n old-k8s-version-225140
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (791.69s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-892233 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d889c88d-3a5f-4450-a2b1-e6e2dc089011] Pending
helpers_test.go:344: "busybox" [d889c88d-3a5f-4450-a2b1-e6e2dc089011] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d889c88d-3a5f-4450-a2b1-e6e2dc089011] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.024411613s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-892233 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.45s)
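
For reference, the DeployApp check above can be reproduced by hand against the same profile. A minimal sketch, assuming the default-k8s-diff-port-892233 context and testdata/busybox.yaml are available locally (kubectl wait stands in for the test's own polling helper):

    # deploy the busybox test pod and wait for it to be Running
    kubectl --context default-k8s-diff-port-892233 create -f testdata/busybox.yaml
    kubectl --context default-k8s-diff-port-892233 wait --for=condition=Ready pod/busybox --timeout=8m
    # verify the open-file-descriptor limit inside the container, as the test does
    kubectl --context default-k8s-diff-port-892233 exec busybox -- /bin/sh -c "ulimit -n"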

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-892233 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-892233 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.111694354s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-892233 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (878.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-640155 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-640155 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (14m38.650343314s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-640155 -n no-preload-640155
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (878.94s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (528.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-078843 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
E1031 00:08:57.631903  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1031 00:09:14.583473  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-078843 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (8m47.983092216s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-078843 -n embed-certs-078843
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (528.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (506.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-892233 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
E1031 00:11:51.232644  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1031 00:12:08.184737  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
E1031 00:14:14.583991  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/functional-167609/client.crt: no such file or directory
E1031 00:14:30.631704  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
E1031 00:17:08.185587  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-892233 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (8m26.629075177s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-892233 -n default-k8s-diff-port-892233
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (506.91s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (66.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-558362 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
E1031 00:32:08.184999  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/ingress-addon-legacy-371910/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-558362 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (1m6.525976922s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (66.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (66.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-740627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-740627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m6.415946988s)
--- PASS: TestNetworkPlugins/group/auto/Start (66.42s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.59s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-558362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-558362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.585234004s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.59s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-558362 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-558362 --alsologtostderr -v=3: (10.199875611s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-558362 -n newest-cni-558362
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-558362 -n newest-cni-558362: exit status 7 (86.470914ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-558362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)
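
The check above relies on minikube's Go-template status output and its non-zero exit code for a stopped profile; as the log notes, exit status 7 here is expected. A minimal sketch of scripting the same probe and re-enabling an addon while stopped:

    # a stopped profile prints "Stopped" and exits non-zero (7 in the run above)
    out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-558362 -n newest-cni-558362
    echo "exit code: $?"   # 7 is treated as "may be ok" by the test
    out/minikube-linux-amd64 addons enable dashboard -p newest-cni-558362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4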

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (57.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-558362 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-558362 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (56.839843067s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-558362 -n newest-cni-558362
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (57.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-740627 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-740627 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-h659r" [f38e8be9-55e0-4eaa-9970-0b2673022c33] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-h659r" [f38e8be9-55e0-4eaa-9970-0b2673022c33] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.013322922s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-740627 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-740627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-740627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
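
The Localhost and HairPin checks both run netcat from inside the netcat deployment; the hairpin case connects back to the pod's own service name. A minimal sketch of running both manually, assuming the auto-740627 context:

    # loopback reachability from inside the pod
    kubectl --context auto-740627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin: the pod reaches itself through the "netcat" service
    kubectl --context auto-740627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"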

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (71.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-740627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-740627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m11.546820312s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.55s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-558362 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)
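
VerifyKubernetesImages lists the images known to CRI-O inside the VM over ssh. A minimal sketch of the same query; the jq filter is an optional assumption (jq installed on the host) for extracting repo tags from the JSON:

    out/minikube-linux-amd64 ssh -p newest-cni-558362 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'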

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-558362 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-558362 -n newest-cni-558362
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-558362 -n newest-cni-558362: exit status 2 (303.530836ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-558362 -n newest-cni-558362
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-558362 -n newest-cni-558362: exit status 2 (280.953937ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-558362 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-558362 -n newest-cni-558362
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-558362 -n newest-cni-558362
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.83s)
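
The Pause test pauses the profile, confirms via status templates that the API server reports Paused and the kubelet reports Stopped (both with exit status 2), then unpauses. A minimal sketch of the same sequence:

    out/minikube-linux-amd64 pause -p newest-cni-558362 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-558362 -n newest-cni-558362   # "Paused", exit status 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-558362 -n newest-cni-558362     # "Stopped", exit status 2
    out/minikube-linux-amd64 unpause -p newest-cni-558362 --alsologtostderr -v=1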

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (105.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-740627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-740627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m45.853073244s)
--- PASS: TestNetworkPlugins/group/calico/Start (105.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (117.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-740627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1031 00:34:30.631201  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/addons-780757/client.crt: no such file or directory
E1031 00:34:34.931401  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.crt: no such file or directory
E1031 00:34:34.936768  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.crt: no such file or directory
E1031 00:34:34.947049  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.crt: no such file or directory
E1031 00:34:34.967357  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.crt: no such file or directory
E1031 00:34:35.007737  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.crt: no such file or directory
E1031 00:34:35.088604  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.crt: no such file or directory
E1031 00:34:35.249148  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.crt: no such file or directory
E1031 00:34:35.569490  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.crt: no such file or directory
E1031 00:34:36.210374  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.crt: no such file or directory
E1031 00:34:37.490990  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.crt: no such file or directory
E1031 00:34:40.051791  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.crt: no such file or directory
E1031 00:34:45.173015  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.crt: no such file or directory
E1031 00:34:55.413959  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.crt: no such file or directory
E1031 00:35:01.981712  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/client.crt: no such file or directory
E1031 00:35:01.987321  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/client.crt: no such file or directory
E1031 00:35:01.997664  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/client.crt: no such file or directory
E1031 00:35:02.018729  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/client.crt: no such file or directory
E1031 00:35:02.059379  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/client.crt: no such file or directory
E1031 00:35:02.139825  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/client.crt: no such file or directory
E1031 00:35:02.300824  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/client.crt: no such file or directory
E1031 00:35:02.621339  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/client.crt: no such file or directory
E1031 00:35:03.261678  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/client.crt: no such file or directory
E1031 00:35:04.542889  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/client.crt: no such file or directory
E1031 00:35:07.103409  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/client.crt: no such file or directory
E1031 00:35:12.224606  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-740627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m57.99333438s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (117.99s)
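
Unlike the other network-plugin groups, custom-flannel starts minikube with a user-supplied CNI manifest rather than a built-in plugin. A condensed sketch of the invocation used above, assuming a local kube-flannel.yaml:

    out/minikube-linux-amd64 start -p custom-flannel-740627 --memory=3072 \
      --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio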

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-k5vfk" [233dbdd6-2700-43cc-a45c-5f9b6b2b075c] Running
E1031 00:35:15.895087  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.023648251s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-740627 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (15.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-740627 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4ggkr" [0753ab29-5388-4fdc-82e4-457ad55f5ff6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1031 00:35:22.464776  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-4ggkr" [0753ab29-5388-4fdc-82e4-457ad55f5ff6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 15.018796144s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (15.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-740627 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-740627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-740627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (103.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-740627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1031 00:35:56.855674  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/old-k8s-version-225140/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-740627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m43.238441905s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (103.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9gsn4" [7d41b2a9-8eeb-4c56-b394-ea94612e8dbf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.026437143s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-740627 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-740627 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8qv2v" [177889cc-00b7-4e0c-b581-ef019e0dc1c9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8qv2v" [177889cc-00b7-4e0c-b581-ef019e0dc1c9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.01421023s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-740627 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-740627 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wcfhz" [6912aa62-a64d-4dd3-bae2-eef7aa9e272e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wcfhz" [6912aa62-a64d-4dd3-bae2-eef7aa9e272e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.019386348s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (85.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-740627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-740627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m25.314258706s)
--- PASS: TestNetworkPlugins/group/flannel/Start (85.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-740627 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-740627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-740627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-740627 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-740627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-740627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (70.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-740627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-740627 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m10.804078519s)
--- PASS: TestNetworkPlugins/group/bridge/Start (70.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-740627 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-740627 replace --force -f testdata/netcat-deployment.yaml
E1031 00:37:37.275473  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8n5dl" [c2c3f823-ec37-4538-9a5f-105cc84ef4db] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8n5dl" [c2c3f823-ec37-4538-9a5f-105cc84ef4db] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.010312742s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nr5zn" [5da5781d-7fad-42e5-80ab-ff07073eae04] Running
E1031 00:37:45.828091  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/no-preload-640155/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.022934764s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-740627 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-740627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-740627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-740627 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (13.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-740627 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lskcr" [90094190-1bfc-40df-8b2a-2a0f1b87b4ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lskcr" [90094190-1bfc-40df-8b2a-2a0f1b87b4ed] Running
E1031 00:37:57.756233  216005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/default-k8s-diff-port-892233/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.016928801s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-740627 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-740627 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-k2k5g" [127c9710-266d-48e6-844a-47ab7970a130] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-k2k5g" [127c9710-266d-48e6-844a-47ab7970a130] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.011700782s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-740627 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (26.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-740627 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-740627 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.209283327s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-740627 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-740627 exec deployment/netcat -- nslookup kubernetes.default: (10.216942699s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (26.07s)
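
The bridge DNS check timed out once and passed on retry. A minimal retry-loop sketch of the same probe, assuming the bridge-740627 context:

    # retry in-cluster DNS resolution until it succeeds (the run above needed one retry)
    until kubectl --context bridge-740627 exec deployment/netcat -- nslookup kubernetes.default; do
      sleep 5
    done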

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-740627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-740627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-740627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-740627 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    

Test skip (36/292)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.3/cached-images 0
13 TestDownloadOnly/v1.28.3/binaries 0
14 TestDownloadOnly/v1.28.3/kubectl 0
18 TestDownloadOnlyKic 0
32 TestAddons/parallel/Olm 0
44 TestDockerFlags 0
47 TestDockerEnvContainerd 0
49 TestHyperKitDriverInstallOrUpdate 0
50 TestHyperkitDriverSkipUpgrade 0
101 TestFunctional/parallel/DockerEnv 0
102 TestFunctional/parallel/PodmanEnv 0
118 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
150 TestGvisorAddon 0
151 TestImageBuild 0
184 TestKicCustomNetwork 0
185 TestKicExistingNetwork 0
186 TestKicCustomSubnet 0
187 TestKicStaticIP 0
218 TestChangeNoneUser 0
221 TestScheduledStopWindows 0
223 TestSkaffold 0
225 TestInsufficientStorage 0
229 TestMissingContainerUpgrade 0
238 TestStartStop/group/disable-driver-mounts 0.15
254 TestNetworkPlugins/group/kubenet 4.65
262 TestNetworkPlugins/group/cilium 5.25
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-221554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-221554
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-740627 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-740627

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-740627

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-740627

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-740627

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-740627

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-740627

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-740627

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-740627

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-740627

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-740627

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-740627

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-740627" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-740627" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 30 Oct 2023 23:59:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0-beta.0
      name: cluster_info
    server: https://192.168.61.111:8443
  name: pause-511532
contexts:
- context:
    cluster: pause-511532
    extensions:
    - extension:
        last-update: Mon, 30 Oct 2023 23:59:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0-beta.0
      name: context_info
    namespace: default
    user: pause-511532
  name: pause-511532
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-511532
  user:
    client-certificate: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/pause-511532/client.crt
    client-key: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/pause-511532/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-740627

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-740627"

                                                
                                                
----------------------- debugLogs end: kubenet-740627 [took: 4.475046623s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-740627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-740627
--- SKIP: TestNetworkPlugins/group/kubenet (4.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-740627 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-740627

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-740627

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-740627

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-740627

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-740627

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-740627

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-740627

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-740627

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-740627

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-740627

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-740627

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-740627" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-740627

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-740627

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-740627

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-740627

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-740627" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-740627" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17527-208817/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 30 Oct 2023 23:59:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0-beta.0
      name: cluster_info
    server: https://192.168.61.111:8443
  name: pause-511532
contexts:
- context:
    cluster: pause-511532
    extensions:
    - extension:
        last-update: Mon, 30 Oct 2023 23:59:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0-beta.0
      name: context_info
    namespace: default
    user: pause-511532
  name: pause-511532
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-511532
  user:
    client-certificate: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/pause-511532/client.crt
    client-key: /home/jenkins/minikube-integration/17527-208817/.minikube/profiles/pause-511532/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-740627

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-740627" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-740627"

                                                
                                                
----------------------- debugLogs end: cilium-740627 [took: 5.065148736s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-740627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-740627
--- SKIP: TestNetworkPlugins/group/cilium (5.25s)

                                                
                                    